CN103596004A - Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC - Google Patents

Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC

Info

Publication number
CN103596004A
CN103596004A (application CN201310581162.4A)
Authority
CN
China
Prior art keywords
prediction unit
prediction unit block
block
prediction mode
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310581162.4A
Other languages
Chinese (zh)
Inventor
魏芳 (Wei Fang)
黄慧明 (Huang Huiming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201310581162.4A
Publication of CN103596004A
Legal status: Pending

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an intra-frame prediction method and device based on mathematical statistics and classification training in HEVC. The method comprises the following steps: 1) coding parameters are set; 2) training sample data are obtained; 3) the PU blocks of the DC mode are classified; 4) reference pixel sample sets and original pixel sample sets are obtained from the original frames and serve as training samples; 5) sample training is carried out. Compared with the prior art, the intra-frame prediction method and device based on mathematical statistics and classification training in HEVC pay more attention to the variation distribution of image textures, improve the prediction accuracy of the DC mode and the PLANAR mode, achieve a better subjective effect at detailed positions of the image background, and at the same time increase the usage proportion of the DC mode and the PLANAR mode.

Description

Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC
Technical field
The present invention relates to the field of High Efficiency Video Coding (HEVC), and more specifically to an intra-frame prediction method and device based on mathematical statistics and classification training in HEVC.
Background art
Since H.264/MPEG-4 AVC was released in 2003, nine years have passed, and with the continuous improvement of network technology and terminal processing capability, new requirements have been put forward for the widely used MPEG-2, MPEG-4, H.264, and other standards. The desired capabilities include: 1) high definition; 2) 3D; 3) mobile and wireless, in order to serve emerging applications such as new home theater, remote monitoring, digital broadcasting, mobile streaming media, mobile imaging, and medical imaging. In addition, after the release of H.264/AVC, the accumulation over several years of technologies such as novel motion compensation, transform, interpolation, and entropy coding laid the technical foundation for releasing a new generation of video coding standards.
The first JCT-VC meeting was held in Dresden, Germany, in April 2010; it determined the name of the new-generation video coding standard, HEVC (High Efficiency Video Coding), and established a test model. HEVC is a video project jointly established by ITU-T VCEG and the ISO/IEC MPEG standards organizations. The first version of the HEVC standard was published in January 2013. At present, the working group is planning additional work to extend the existing HEVC standard, including support for professional applications, higher precision and color formats, scalable coding, and 3D/stereo/multi-view coding. HEVC has been designed to address essentially all existing application demands of H.264/MPEG-4 AVC, with particular focus on two key issues: increasing video resolution and increasing the use of parallel processing architectures.
The design of HEVC is intended to handle all the application demands arising from H.264/MPEG-4 AVC. HEVC mainly focuses on two key issues: increasing video resolution and employing parallel processing architectures. The syntax of HEVC is generic, making it suitable for a wide variety of applications. HEVC standardizes only the bitstream structure and syntax, the constraints on the bitstream, and the mapping between the bitstream and the decoded pictures. This mapping is realized through the semantics of the syntax and the definition of the decoding process, so that any decoder meeting the standard requirements produces the same decoded result for a given bitstream that satisfies the standard constraints. Limiting the normative scope in this way allows the maximum degree of freedom to optimize implementations for specific applications (balancing compression quality, implementation cost, etc.). The core objective of the new-generation video compression standard is to double compression efficiency relative to the H.264/AVC High Profile; that is, under the premise of the same video picture quality, the bit rate of the video stream is reduced by 50%. While improving compression efficiency, the complexity of the encoder is allowed to increase appropriately.
Intra prediction is an important coding tool for I frames and for P/B frames, although it is used less in P/B frames. The method proposed by the present invention is directed to improving I-frame intra prediction in HEVC. An I frame is an independent frame and is the first frame of each GOP. It serves as the starting point of a new scene or as a re-synchronization point, enables random access functions such as fast forward and rewind, does not produce significant blur, and can enhance the visual quality of the video. With the rapid development of hardware, I frames are also conducive to multi-core, multi-threaded parallel processing. The I frame is therefore a very important frame in video coding.
The I frame is the key frame in video coding, and intra prediction is very important to the coding of I frames; the prediction directly affects the performance of quantization, transform, and entropy coding. HEVC still adopts the traditional hybrid coding structure, and the considerable coding gain it obtains is mainly attributed to the many novel techniques it has adopted. The intra part is intra prediction based on the spatial domain: HEVC extends the prediction angles to 33 directions within a 180° range and adopts flexible block partitioning, dividing the basic unit block into CUs (coding units), TUs (transform units), and PUs (prediction units), with block sizes ranging from 64 × 64 down to 4 × 4. However, the existing DC prediction mode in HEVC uses the mean value of the reference pixels and cannot achieve a good prediction effect for all blocks. In addition, the gradient interpolation used by the PLANAR prediction mode is somewhat coarse, which affects prediction performance. Therefore, for the two prediction modes with the highest average probability of use (the DC prediction mode and the PLANAR prediction mode), there is still room for improvement.
Summary of the invention
The present inventors have made the present invention in consideration of the above situation of the prior art. The method of the present invention focuses on improving part of the intra prediction method for HEVC I frames. The present invention proposes an intra-frame prediction method based on mathematical statistics and classification training, in order to train and improve the prediction weight coefficients of the above two prediction modes, DC and PLANAR, in HEVC coding. The present invention improves the usage rate of the DC and PLANAR modes, reduces the bit rate at the same PSNR, and provides a certain improvement in the subjective quality of the video image.
Therefore, the technical problem to be solved by the present invention is as follows: in HEVC, the existing prediction methods of the DC and PLANAR modes are fairly simple; compared with the angle prediction modes, both of these modes produce prediction results with larger prediction errors, especially at the lower-right corner of the PU block.
According to an embodiment of the present invention, an intra-frame prediction method for High Efficiency Video Coding is provided, comprising the following steps.
Step 1: input video sequence data, wherein the video sequence data is composed of the luminance values of the pixels of video frames, each video frame is divided into a plurality of coding unit blocks, each coding unit block is further divided into prediction unit blocks, and a plurality of neighboring pixels of each prediction unit block in the video frame serve as the reference pixels of that prediction unit block.
Step 2: using the multiple standard intra prediction modes of High Efficiency Video Coding, perform intra prediction on each prediction unit block of the video frame, and, for each prediction unit block, determine the prediction mode with the minimum rate-distortion cost among the multiple prediction modes as its optimal prediction mode.
Step 3: calculate the variance of the reference pixels of each prediction unit block whose optimal prediction mode is a first prediction mode among the multiple standard intra prediction modes, compare the calculated variance of each prediction unit block with predetermined variance thresholds, and, according to the comparison results, divide the prediction unit blocks whose optimal prediction mode is the first prediction mode into a plurality of groups of prediction unit blocks.
Step 4: for every group of prediction unit blocks among the plurality of groups, use the least squares method to solve the equation

Y_{i,j} = X · ω_{i,j}

to obtain the weight prediction coefficient matrix ω_{i,j} of each pixel position in the prediction unit blocks of this group, whose size is n × 1, where X is the m × n matrix formed by the luminance values of all the reference pixels of the prediction unit blocks in this group, n is the number of reference pixels of each prediction unit block, m is the number of prediction unit blocks in this group, Y_{i,j} is the target pixel vector formed by the luminance values of the pixel at position (i, j) in each prediction unit block, whose size is m × 1, and i, j denote the vertical and horizontal coordinates of the pixel within the prediction unit block.
According to an embodiment of the present invention, the intra-frame prediction method further comprises the following steps.
Step 5: continue to input video sequence data, and perform the operations of steps 2 and 3 on the newly input video sequence data to obtain a plurality of groups of prediction unit blocks.
Step 6: use the weight prediction coefficient matrix ω_{i,j} to form a weighted sum of the luminance values of the reference pixels of each prediction unit block in each group of prediction unit blocks, and take the result of the weighted summation as the predicted luminance value of the pixel at the corresponding position (i, j) in that prediction unit block.
According to an embodiment of the present invention, an intra prediction device for carrying out the intra-frame prediction method as described in any one of claims 2 to 10 is provided, comprising an HEVC coding unit module, an HEVC prediction unit module and a sample training module, wherein the HEVC coding unit module performs quad-tree partitioning on the largest coding unit of a video frame, dividing the largest coding unit into a plurality of coding unit blocks; the HEVC coding unit module further performs quad-tree partitioning on the coding unit blocks, each coding unit block being a single prediction unit block in the current partition layer and being divided into a plurality of prediction unit blocks in the next partition layer; the HEVC prediction unit module obtains the bit rate and rate-distortion cost of the current prediction unit block under the current prediction mode through intra prediction, quantization, transform and pseudo-entropy coding operations within the prediction unit block, compares the bit rate and rate-distortion cost of the current partition layer and the next partition layer, and, according to the comparison result, decides whether to adopt the prediction unit block partition of the current partition layer or of the next partition layer; the sample training module comprises a sample acquisition unit and a sample training unit; the sample acquisition unit is configured to obtain, when predictive coding is performed under the DC and PLANAR prediction modes, the luminance value of each pixel in each prediction unit block and the luminance values of the reference pixels of each prediction unit block from the video sequence data, and to provide them to the sample training unit; and the sample training unit is configured to carry out steps 3 to 6.
The beneficial effects of the present invention are mainly embodied in the following aspects: by adopting the method and device according to the embodiments of the present invention, compared with the prior art, more attention is paid to the variation distribution of image textures, the prediction accuracy of the DC and PLANAR modes is improved, a better subjective effect is achieved at detailed positions of the image background, and at the same time the usage ratio of the DC and PLANAR prediction modes is increased.
Brief description of the drawings
Fig. 1 is a diagram of the positional relationship between the pixels and the reference pixels of a 4 × 4 PU block according to an embodiment of the present invention.
Fig. 2 is a pie chart, according to an embodiment of the present invention, in which the 4 × 4 PU blocks of the DC prediction mode in the Vidyo1 sequence are divided into five classes according to the standard deviation of their pixels and reference pixels.
Fig. 3 is a training flow chart of the intra-frame prediction method according to an embodiment of the present invention.
Detailed description of the embodiments
The implementation of the technical solution is described in further detail below in conjunction with the accompanying drawings.
It should be noted that, in the following description, the DC or PLANAR prediction mode is taken as an example to describe the principles of the embodiments of the present invention. However, those skilled in the art will appreciate that, because the PLANAR prediction mode and the DC prediction mode are similar in principle, the content relating to the DC prediction mode described with reference to the accompanying drawings can be applied similarly to the PLANAR prediction mode, and vice versa.
In addition, those skilled in the art will appreciate that, without departing from the scope and spirit of the principles of the present invention, the embodiments of the present invention can also be applied to various other prediction modes, existing or arising in the future, besides the DC and PLANAR prediction modes.
First, the technical solution according to an embodiment of the present invention is summarized.
According to an embodiment of the present invention, the intra prediction device based on mathematical statistics and classification training mainly comprises an HEVC coding unit module, an HEVC prediction unit module and a sample training module.
In HEVC, a frame image is divided into a plurality of independent LCUs (largest coding units). The HEVC coding unit module divides an LCU into a plurality of CUs (coding units) by performing quad-tree partitioning on the LCU; the CU is the minimum unit of intra coding, and entropy coding of the quantized residual is performed on this basis. For example, the size of a CU can be 8 × 8, 16 × 16, 32 × 32 or 64 × 64.
The HEVC coding unit module performs quad-tree partitioning on a CU, dividing it into one or more PUs. For example, if the current CU size is 8 × 8, the possible PU partitions within the CU are: one 8 × 8 PU (the current partition layer) or four 4 × 4 PUs (the next partition layer). The HEVC prediction unit module obtains estimates of the bit rate and distortion of the current PU prediction (the rate-distortion cost, RD cost) through steps such as intra prediction, quantization, transform and pseudo-entropy coding within the prediction unit (PU), compares the bit rate and distortion estimates of the current partition layer and the next partition layer, and decides whether to split this PU block. Through prediction unit (PU) partitioning, the best prediction unit partition result is selected adaptively according to rate-distortion cost optimization, and the partition result of the CU is then obtained.
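As an illustration of the partition decision just described, the following is a minimal sketch (not taken from the HM source code; the function name and the cost callback are stand-ins) of choosing between the current partition layer and the next one by comparing rate-distortion costs:

```python
import numpy as np

def decide_pu_partition(cu_block, rd_cost_fn):
    """Choose between the current partition layer (one PU covering the CU) and the
    next layer (four quarter-size PUs) by comparing rate-distortion costs.
    rd_cost_fn(block) stands in for intra prediction + quantization + transform +
    pseudo-entropy coding of one PU, returning its RD cost."""
    cu_block = np.asarray(cu_block)
    whole_cost = rd_cost_fn(cu_block)               # cost of the current layer (no split)
    h = cu_block.shape[0] // 2
    quads = [cu_block[:h, :h], cu_block[:h, h:], cu_block[h:, :h], cu_block[h:, h:]]
    split_cost = sum(rd_cost_fn(q) for q in quads)  # cost of the next layer (split)
    return ("split", split_cost) if split_cost < whole_cost else ("no_split", whole_cost)
```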
The intra prediction modes of the HEVC prediction unit module include the angle prediction modes, the DC prediction mode and the PLANAR prediction mode.
The angle prediction modes are used to predict image regions with relatively rich detail by selecting an angular direction, and include fine angle prediction divided into 33 directions within 180 degrees; they have a good prediction effect for image regions with a directional distribution and can effectively reduce the prediction residual.
The DC prediction mode is a mode that predicts by computing the mean value of the reference pixels of the PU, and is suitable for relatively smooth image regions.
The PLANAR prediction mode is a mode that predicts by computing an interpolated value over the gradient region of the reference pixels of the PU, and is suitable for image regions with a gradual-change trend.
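For orientation, the following is a minimal sketch of the conventional predictors that the embodiments set out to improve: DC prediction as the mean of the reference pixels, and a simplified planar-style interpolation. The floating-point arithmetic and the reference layout (top references r(1)–r(4), left references r(9)–r(12) as in Fig. 1) are simplifying assumptions; the HEVC standard specifies integer arithmetic with rounding shifts.

```python
import numpy as np

def dc_predict(top, left):
    """DC prediction: every sample of the N x N PU is the mean of the references."""
    n = len(top)
    refs = np.concatenate([top, left]).astype(np.float64)
    return np.full((n, n), refs.mean())

def planar_predict(top, left, top_right, bottom_left):
    """Simplified planar prediction: each sample averages a horizontal and a vertical
    linear interpolation toward the corner reference samples."""
    n = len(top)
    pred = np.zeros((n, n))
    for y in range(n):
        for x in range(n):
            horiz = (n - 1 - x) * left[y] + (x + 1) * top_right
            vert = (n - 1 - y) * top[x] + (y + 1) * bottom_left
            pred[y, x] = (horiz + vert + n) / (2 * n)
    return pred

top = np.array([100, 102, 104, 106])   # r(1)..r(4)
left = np.array([101, 103, 105, 107])  # r(9)..r(12)
print(dc_predict(top, left)[0, 0])
print(planar_predict(top, left, top_right=108, bottom_left=109))
```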
The sample training module comprises a sample acquisition unit and a sample training unit.
The sample acquisition unit is configured to: when predictive coding is performed normally with the HEVC encoding software under the DC and PLANAR prediction modes, obtain from a plurality of training video sequences the luminance values of the PU block pixels of different sizes (4 × 4, 8 × 8) in the original video frames and the luminance values of the reference pixels at the corresponding positions in the original video frames; the obtained data are stored in separate files for the DC prediction mode and the PLANAR prediction mode, and the data obtained under the same mode (that is, the luminance values of the above two groups of pixels, the PU block pixels and the reference pixels) are further stored separately according to the size of the corresponding block, for use by the sample training unit.
The sample training unit is configured to perform variance statistics on the luminance values of the original-frame PU block pixels obtained during normal coding under the DC prediction mode, so as to obtain the variance distribution of the whole sample set.
The sample training unit is also configured to perform gradient threshold statistics in the horizontal and vertical directions on the luminance values of the original-frame reference pixels obtained during normal coding under the PLANAR prediction mode.
The sample training unit is also configured to classify the samples obtained under the DC prediction mode according to ranges of the original-frame original-pixel variance thresholds; within each classified sample group, the sample points at the different positions in the PU block and the corresponding reference pixels obtained from the original frame are used to train weighting coefficients, so as to obtain the optimal weight prediction coefficients for the DC mode for each pixel position in the PU block.
The sample training module is also configured to classify the samples obtained under the PLANAR prediction mode according to the gradient threshold distribution of the reference pixels obtained from the original frame; within each classified sample group, the sample points at the different positions in the PU block and the corresponding reference pixels obtained from the original frame are used to train weighting coefficients, so as to obtain the optimal weight prediction coefficients for the PLANAR mode for each pixel position in the PU block.
In HEVC, the recursive partitioning of CUs and the recursive partitioning of PUs are closely combined: the CU-level partitioning is performed first, and the CU partition is in turn obtained from the predictive compression and rate-distortion optimization of the PUs. When intra prediction is carried out in PU units, the best prediction mode is obtained by traversing the 33 directions of angle prediction mentioned above as well as DC prediction and PLANAR prediction.
Further, the intra prediction device has the following feature: the training samples are obtained offline, which avoids the additional algorithmic complexity that would otherwise be introduced into the actual coding process; when the trained coefficients are used to improve the DC and PLANAR modes, the prediction time remains essentially unchanged.
Further, the intra prediction device has the following feature: the samples are analyzed and classified by mathematical statistics, and sample training is carried out on the basis of this classification to obtain the optimal prediction weight coefficients, capturing the image content details and variation trends more finely so as to obtain the best prediction effect.
In order to better perform intra prediction for the DC and PLANAR prediction modes, mathematical-statistical analysis is first carried out on the video image content. Under the DC prediction mode, the method of the embodiments of the present invention uses only the left reference pixels r(9)–r(12) and the upper reference pixels r(1)–r(4) shown in Fig. 1. Correlation analysis refers to analyzing two or more correlated variables so as to measure how closely the two variable factors are related. By calculating the correlation between each reference pixel and the pixel at each position in a 4 × 4 block, it can be found that the correlation is closely related to position: the original pixels at different positions and the reference pixels at different positions have a distance-dependent degree of correlation, and within the block, along the direction from the upper-left corner to the lower-right corner, the correlation weakens and the prediction error increases. Taking a 4 × 4 prediction block as an example, among the 8 reference pixels, pixel s(1,1) has the largest correlation with r(1) and r(9), about 0.91, while the correlations between pixel s(4,4) and the 8 reference pixels do not differ much, floating around 0.74. It can thus be seen that the existing DC and PLANAR prediction methods cannot predict all the pixels of many blocks well.
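The position-wise correlations cited above can be reproduced with a short script like the following sketch; the array names and data loading are hypothetical, and the statistic itself is the ordinary Pearson correlation computed across a set of sample blocks.

```python
import numpy as np

def position_reference_correlation(blocks, refs):
    """blocks: (num_samples, 4, 4) original 4x4 PU luminance values;
    refs: (num_samples, 8) corresponding r(1)..r(4), r(9)..r(12).
    Returns the Pearson correlation between each block position and each reference."""
    corr = np.zeros((4, 4, refs.shape[1]))
    for i in range(4):
        for j in range(4):
            for k in range(refs.shape[1]):
                corr[i, j, k] = np.corrcoef(blocks[:, i, j], refs[:, k])[0, 1]
    return corr

# corr[0, 0] is expected to peak near r(1) and r(9) (about 0.91 in the description),
# while corr[3, 3] is flatter and lower (about 0.74), illustrating the distance effect.
```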
Due to the uncertainty of the pixel distribution, the distributions of different sample blocks and their reference pixels differ somewhat, and directly training on all samples of the original PU blocks and their reference pixels cannot yield optimal prediction coefficients that adapt to the image content. Therefore the video image pixels need to be analyzed further.
Variance and standard deviation are the most important and most commonly used indices for measuring dispersion. The variance is the mean of the squared deviations of a variable from its mean, and is an important means of measuring the magnitude of sample fluctuation. The block samples whose optimal mode is DC appear, on the whole, fairly uniform, but in order to capture the image content distribution characteristics more finely, the method uses the variance distribution as a criterion for further observing the data. The Pearson correlation coefficient, also called the Pearson product-moment correlation coefficient, is a linear correlation coefficient; it is a statistic that reflects the degree of linear correlation between two variables. The method of the embodiments of the present invention measures the degree of correlation between the variance of the reference pixels and the variance of the original pixels by the Pearson correlation coefficient; by computing this coefficient, the correlation between the fluctuations of the two quantities can be measured via the variance. For example, in one training-sequence sample, the Pearson correlation coefficient between the variance values of the PU blocks and the variance values of their reference pixels is 0.84907; since a Pearson correlation coefficient of two variables (here the variance of a PU block and the variance of its reference pixels) closer to 1 indicates a stronger correlation, in this case the variance of the reference pixels can be used to estimate the fluctuation of the pixel values of the corresponding PU block.
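For reference, the Pearson correlation coefficient used here is the standard statistic below (a textbook definition, not reproduced from the patent text), where $x_k$ denotes the variance of the $k$-th sample PU block and $y_k$ the variance of its reference pixels:

$$
r \;=\; \frac{\sum_{k=1}^{K}\left(x_k-\bar{x}\right)\left(y_k-\bar{y}\right)}
             {\sqrt{\sum_{k=1}^{K}\left(x_k-\bar{x}\right)^{2}}\;\sqrt{\sum_{k=1}^{K}\left(y_k-\bar{y}\right)^{2}}}
$$

A value near 1 (such as the 0.84907 reported above) indicates that the reference-pixel variance tracks the block-pixel variance closely.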
As shown in Fig. 2, the variance histogram of the original pixels and the reference pixels is close to a Gaussian distribution, with the left side of the peak fairly steep and the right side exhibiting a long tail. The peak falls in the range 1–2, which matches the smooth block content to which the DC mode is suited. For the samples on the right side of the peak, the sample variance is larger, indicating that the image regions where those samples lie have large content differences, and better prediction weight coefficients need to be obtained by training. According to the tail characteristic of the variance histogram, each PU block of the data samples is classified as follows: D1 (flat region, variance value — i.e., the variance of the PU block reference pixels — of 0–1); D2 (region with slight fluctuation over a narrow range, variance value of 1–2.5); D3 (region with fluctuation over a narrow range, variance value of 2.5–4); D4 (region with larger differences, variance value of 4–10); D5 (uneven region, variance value of 10–40). For example, the very smooth class D1 accounts for roughly 25%–35% of all samples, while the remaining classes still occupy a very large proportion; therefore not all pixels in PU blocks that adopted the DC prediction mode are suited to prediction by computing the mean value.
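A minimal sketch of this D1–D5 classification is given below. The thresholds are the ones listed above; the boundary handling (strict upper bounds) and the clamping of blocks whose variance exceeds 40 into D5 are assumptions.

```python
import numpy as np

DC_CLASS_EDGES = [1.0, 2.5, 4.0, 10.0, 40.0]  # upper edges of D1..D5

def classify_dc_block(ref_pixels):
    """Return the class index 0..4 (D1..D5) of a DC-mode PU block from the
    variance of its reference pixels."""
    v = np.var(np.asarray(ref_pixels, dtype=np.float64))
    for idx, edge in enumerate(DC_CLASS_EDGES):
        if v < edge:
            return idx
    return len(DC_CLASS_EDGES) - 1  # clamp anything above the last edge into D5
```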
For the PLANAR mode, the purpose of the original algorithm is the prediction of gradient regions: it computes the difference between the upper-left point and the upper-right point (or the lower-left point and the upper-left point) and applies a gradual-change compensation during the two-tap interpolation. In order to train more effectively on the samples whose optimal mode is PLANAR, the embodiments of the present invention carry out a finer gradient calculation according to the gradient characteristics. Specifically, by calculating the differences between neighboring pixels (for example, the left reference pixels r(9)–r(12) and the upper reference pixels r(1)–r(4) shown in Fig. 1), the gradient sets of the reference pixels in the horizontal and vertical directions are obtained. According to the above gradient calculations in the horizontal and vertical directions (the signs of the gradient values in the two directions), each PU block of the obtained data samples is divided into 5 types: P1 (reference pixels increasing in the vertical direction and increasing in the horizontal direction); P2 (reference pixels decreasing in the vertical direction and increasing in the horizontal direction); P3 (reference pixels increasing in the vertical direction and decreasing in the horizontal direction); P4 (reference pixels decreasing in the vertical direction and decreasing in the horizontal direction); P5 (others).
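A corresponding sketch of the P1–P5 classification follows. How the per-neighbor differences are aggregated into a single sign per direction (here, the mean of the successive differences) and the treatment of zero gradients are assumptions; the description only fixes the sign combinations.

```python
import numpy as np

def classify_planar_block(top_refs, left_refs):
    """Return the class index 0..4 (P1..P5) of a PLANAR-mode PU block from the
    signs of the gradients along the top (horizontal) and left (vertical) references."""
    grad_h = np.diff(np.asarray(top_refs, dtype=np.float64)).mean()   # r(1)..r(4)
    grad_v = np.diff(np.asarray(left_refs, dtype=np.float64)).mean()  # r(9)..r(12)
    if grad_v > 0 and grad_h > 0:
        return 0  # P1: increasing vertically and horizontally
    if grad_v < 0 and grad_h > 0:
        return 1  # P2: decreasing vertically, increasing horizontally
    if grad_v > 0 and grad_h < 0:
        return 2  # P3: increasing vertically, decreasing horizontally
    if grad_v < 0 and grad_h < 0:
        return 3  # P4: decreasing vertically and horizontally
    return 4      # P5: zero gradients or other cases
```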
According to the above classification of the DC and PLANAR mode samples, within the same class, the pixel at each position is taken in turn from each PU block in the original frame (a 4 × 4 PU block has pixels at 16 different positions; an 8 × 8 block has pixels at 64 different positions) to form the original pixel vector, which may for example be denoted Y_{i,j}. Then the reference pixels corresponding to the PUs taken in the previous step are taken out to form the reference pixel matrix, which may for example be denoted X. According to the formula Y_{i,j} = X · ω_{i,j}, where ω_{i,j} is a group of weighting coefficient vectors, the optimal weighting coefficients of the pixels at the different positions of the blocks (4 × 4 and 8 × 8) in each group can be calculated (the above training process is described in more detail in the following description).
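The per-position least-squares fit described by the formula above can be sketched as follows; the function and array names are illustrative, and the patent only specifies that Y_{i,j} = X · ω_{i,j} is solved by least squares within each sample group.

```python
import numpy as np

def train_position_weights(ref_matrix, block_tensor):
    """Least-squares training of per-position weight vectors for one sample group.
    ref_matrix:   (m, n)    luminance of the n reference pixels of each of the m PU blocks (X)
    block_tensor: (m, N, N) luminance of the N x N original PU blocks (N = 4 or 8)
    Returns an (N, N, n) array: one n-element weight vector omega_(i,j) per pixel position."""
    m, n = ref_matrix.shape
    size = block_tensor.shape[1]
    weights = np.zeros((size, size, n))
    for i in range(size):
        for j in range(size):
            y = block_tensor[:, i, j]                           # target vector Y_(i,j), shape (m,)
            w, *_ = np.linalg.lstsq(ref_matrix, y, rcond=None)  # solve Y = X w in the least-squares sense
            weights[i, j] = w
    return weights
```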
The flow of the method of the embodiment of the present invention is described below. Fig. 3 is the training flow chart of the intra-frame prediction method according to an embodiment of the present invention. It should be noted that Fig. 3 is only a schematic diagram used to illustrate the principles of the embodiments of the present invention and does not limit the embodiments of the present invention; the operations in the boxes of Fig. 3 need not correspond exactly to the steps described in this specification.
The training flow of the intra-frame prediction method according to an embodiment of the present invention mainly comprises the following steps:
Step 1: set the coding parameters using the freely available HEVC reference software HM8.0. As an example, the configuration file encoder_intra_main.cfg is selected, FramesToBeEncoded (the number of frames to be encoded) is set to 16, MaxCUWidth (the maximum CU width) is set to 32, MaxCUHeight (the maximum CU height) is set to 32, and QP (the quantization parameter) is set to 7. A smaller QP value is selected so that the coding result has a high bit rate and a high PSNR (Peak Signal to Noise Ratio): the video image compression ratio is lower, but the decompressed video quality is good and the data distortion is small; at the same time, the CU partition of the coding result favors small blocks, so that an abundant amount of training data can be obtained (the present invention only chooses 4 × 4 and 8 × 8 PU blocks as training samples).
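For concreteness, the settings named in step 1 would appear in the HM configuration file roughly as in the excerpt below (illustrative only; the key/value syntax follows the usual HM .cfg convention, and only the parameters mentioned above are shown):

```
# Excerpt of an encoder_intra_main.cfg-style configuration for step 1
FramesToBeEncoded : 16    # number of frames to be encoded
MaxCUWidth        : 32    # maximum CU width
MaxCUHeight       : 32    # maximum CU height
QP                : 7     # quantization parameter (small QP -> high rate, small blocks)
```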
Step 2: obtain the training sample data. After the configuration file is input, the HM8.0 encoding software starts the compression coding of the video; in order to obtain the image frame data, partition information and mode-selection information during coding, code for exporting these data needs to be added to the HM8.0 program. Next, intra prediction is performed on each PU block of the video frames of the training video sequences. First, the 35 intra prediction modes are traversed, the sum of absolute prediction errors of each intra prediction mode is calculated, and the modes with the smallest sums of absolute prediction errors form a candidate mode group; then all modes in the candidate mode group are traversed again, and under each mode in the candidate mode group, prediction, transform and pseudo-entropy coding are performed respectively, the rate-distortion cost (RD cost) of each mode is calculated, and the mode with the minimum rate-distortion cost is selected as the intra prediction mode of the current PU block. If the optimal prediction mode of the current PU is DC and the PU size is 4 × 4, the original pixel luminance values of the current 4 × 4 block in the original frame and the reference pixel luminance values at the corresponding positions in the original frame (the left reference pixels r(9)–r(12) and the upper reference pixels r(1)–r(4) shown in Fig. 1) are exported, for example, to a file (the above data may also be printed). The same export operation is also carried out for 8 × 8 PU blocks under the DC mode (the reference pixels of an 8 × 8 PU block comprise the 8 reference pixel values above the block and the 8 reference pixel values to its left) and for 4 × 4 and 8 × 8 PU blocks under the PLANAR mode. This step yields, for one QP value, a total of 4 data files for the DC and PLANAR modes (corresponding to the two PU block sizes under the two modes), each containing two groups of data: the original pixel luminance values in the original frames and the reference pixel luminance values in the original frames.
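The two-stage mode decision described in step 2 (rough ranking by the sum of absolute prediction errors, then full rate-distortion evaluation of the candidates) can be sketched as below; the number of candidates and the helper callbacks are assumptions standing in for the encoder internals.

```python
import numpy as np

def choose_intra_mode(block, predict_fn, rd_cost_fn, num_candidates=3):
    """Rank all 35 intra modes by the sum of absolute prediction errors, then return
    the minimum-RD-cost mode among the best candidates. predict_fn(mode, block) gives
    the prediction; rd_cost_fn(mode, block) performs prediction, transform and
    pseudo-entropy coding and returns the rate-distortion cost."""
    sad = {m: np.abs(block - predict_fn(m, block)).sum() for m in range(35)}
    candidates = sorted(sad, key=sad.get)[:num_candidates]
    return min(candidates, key=lambda m: rd_cost_fn(m, block))
```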
Step 3: classification of the PU blocks of the DC mode. In order to make the training general across different QP values and to avoid training errors caused by the distortion introduced by transform, quantization, inverse quantization and inverse transform, this step uses the reference pixel sample groups and the original pixel sample groups obtained from the original frames as training samples. For each sample block obtained under the DC mode (4 × 4 or 8 × 8) and its corresponding reference pixels, the standard deviations of the sample block (16 pixels for a 4 × 4 block, 64 pixels for an 8 × 8 block) and of the reference pixels (8 reference pixels for a 4 × 4 block, 16 reference pixels for an 8 × 8 block) are calculated respectively; the correlation between the original block pixels and the reference pixels can be estimated from the variance of the original block pixels and the variance of the reference pixels. According to the five ranges D1–D5 described above, the data of the sample blocks and their corresponding reference pixels are divided, according to the reference pixel standard deviation, into the corresponding five groups (five classes), each group corresponding to one of the above classes (one of the ranges D1–D5). It should be noted that the above division into the five ranges D1–D5 is only an example used to illustrate the principle of the present invention; those skilled in the art can adjust the variance thresholds according to the specific situation and can adopt another number of classes, that is, the number of the above ranges is not limited to five.
Step 4: in this step, the reference pixel sample groups and the original pixel sample groups obtained from the original frames are likewise used as training samples. For each 4 × 4 sample block obtained under the PLANAR mode and its corresponding reference pixels, the gradient values between the reference pixels in the horizontal direction and in the vertical direction are calculated respectively; the same calculation is also carried out for each 8 × 8 block. According to the five ranges P1–P5 described above, the reference pixel sample groups and original pixel sample groups obtained from the original frames for the 4 × 4 and 8 × 8 blocks are each divided into the corresponding five groups according to the obtained horizontal-direction and vertical-direction reference pixel gradient values, each group corresponding to one of the above classes (one of the ranges P1–P5). It should be noted that the above division into the five ranges P1–P5 is only an example used to illustrate the principle of the present invention; those skilled in the art can adjust the gradient thresholds according to the specific situation and can adopt another number of classes, that is, the number of the above ranges is not limited to five.
Step 5: this step carries out the sample training. For example, the training sample data of the 4 × 4 PU blocks under the DC mode comprise the PU block pixel luminance values in the original frames and the reference pixel luminance values in the original frames.
Specifically, for the first of the five groups of sample data obtained in step 3 or 4, suppose the number of PU blocks in this group is m. The luminance values of all the pixels of the m PU blocks in the original frames form a 4m × 4 matrix. The luminance values of all the reference pixels form an m × 8 matrix X (assuming each pixel, i.e. the PU block it belongs to, has 8 reference pixels; for example, a 4 × 4 PU block corresponds to 8 reference pixels, so that if the first group contains 1000 4 × 4 PU blocks (m = 1000), it corresponds to 1000 reference pixel vectors of size 8 × 1). From the above 4m × 4 matrix of PU blocks, the luminance values of the pixels at each position (here, "position" means the relative position of the pixel within the PU block, such as position s(1,1) in Fig. 1) are taken out in turn (the luminance values of m pixels are taken out for each position), and the luminance values of the pixels at each position form a target pixel vector Y_{i,j} of size m × 1. For each pixel in a PU block of size 4 × 4, if it has 8 reference pixels, there should be 8 weight prediction coefficient values corresponding respectively to the 8 reference pixels; therefore the weight prediction coefficient vector ω_{i,j} formed by the 8 weight prediction coefficient values of each pixel has size 8 × 1. By solving Y_{i,j} = X · ω_{i,j} with the least squares method, the weight prediction coefficient vector of each pixel (each s(i, j)) in the 4 × 4 PU blocks of this group can be obtained, where i, j denote the vertical and horizontal coordinates of the pixel within the PU block (the pixels at the same position of the m PU blocks in the same class (group) have the same weight prediction coefficient vector, that is, the m pixels at the same position share one weight prediction coefficient vector).
The above process is repeated for the other four groups of sample data of the 4 × 4 PU blocks under the DC mode, finally yielding, for the 4 × 4 PU blocks under the DC mode, five weight coefficient groups classified according to the original PU block variance thresholds, each weight coefficient group comprising 16 weight prediction coefficient vectors.
In the same way, a similar training process is carried out for the 8 × 8 PU blocks under the DC mode and for the 4 × 4 and 8 × 8 PU blocks under the PLANAR mode.
Through the above steps, the weight prediction coefficients for improving the DC prediction mode and the PLANAR prediction mode are obtained (for the two modes, training is carried out separately for the PU block sizes 4 × 4 and 8 × 8). Each pixel position in the PU blocks of each class under each PU block size has its own group of weight prediction coefficients. When intra prediction coding is carried out, the coefficients are used according to the characteristics of the image content: for DC prediction of 4 × 4 and 8 × 8 blocks, the reference pixel variance value is first calculated, the weight coefficients of the corresponding pixel positions within the matching class are selected, and weighted prediction is carried out (the luminance values of the reference pixels are weighted and summed by the weight coefficients). Similarly, for PLANAR prediction of 4 × 4 and 8 × 8 blocks, the reference pixel gradient situation is first calculated, the weight coefficients of the corresponding pixel positions within the matching class are selected, and weighted prediction is carried out (the luminance values of the reference pixels are weighted and summed by the weight coefficients). That is, the prediction mode and the corresponding weight prediction coefficients are selected adaptively, improving the prediction accuracy and reducing the prediction residual.
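At prediction time the trained coefficients are applied as a per-position weighted sum of the reference luminance values; a minimal sketch is given below (the array layout and function names are assumptions; classify_fn would be, for example, the D1–D5 variance rule or the P1–P5 gradient rule sketched earlier).

```python
import numpy as np

def weighted_predict(ref_pixels, trained_weights, classify_fn):
    """ref_pixels:      (n,)         reference luminance values of the current PU
    trained_weights: (C, N, N, n) per-class, per-position weight vectors from training
    classify_fn:     maps ref_pixels to a class index 0..C-1
    Returns the N x N block of predicted luminance values."""
    cls = classify_fn(ref_pixels)
    w = trained_weights[cls]  # (N, N, n)
    return np.tensordot(w, np.asarray(ref_pixels, dtype=np.float64), axes=([2], [0]))

# e.g. pred = weighted_predict(refs, dc_weights_4x4, classify_dc_block)
```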
Afterwards, the weighting coefficients are applied. The weight coefficient groups corresponding to each PU block size under the DC and PLANAR modes are added to the DC and PLANAR prediction mode code blocks in the HM program. In this way, when the HM program runs the encoder, if the current PU block is predicted with the DC or PLANAR mode (i.e., its optimal prediction mode is DC or PLANAR), the weight coefficient group corresponding to the size and class of this PU block (the class being determined as in the above training process, i.e., the 16 or 64 weight vectors corresponding to the pixel positions) is selected and used for weighted intra prediction. Further, the above method can also have the following feature: in step 1, selecting the common HEVC test sequences to form the training set gives the adopted algorithm a certain generality, making it applicable to most video sequences, and selecting video sequences of larger size (for example 720p, 1080p) meets the current demand for high-definition video sequences.
In HEVC, the design of angle prediction has basically reached a dense level of coverage within 180 degrees that matches the statistical characteristics of the distribution of image pixel content, and angle prediction achieves a very good effect. Statistics show that the DC and PLANAR modes account for up to 25%–30% of all used modes; they are two prediction modes with a very high frequency of use, so improving them is of great significance for the whole predictive coding.
Specifically, in step 3 it is considered that the DC mode algorithm was originally designed for relatively flat regions of an image, i.e., the mean value (mathematical expectation) of the reference pixels is computed as the predicted pixel value of the whole block; moreover, in probability theory and mathematical statistics, the variance measures the degree of deviation between a random variable and its mathematical expectation. Therefore, this step selects the variance as the measure of pixel variation within a DC block and as the criterion for classification.
Specifically, in step 4 it is considered that the PLANAR mode algorithm was originally designed for gradient regions of an image: the difference between the upper-left reference pixel and the upper-right reference pixel (and between the upper-left and lower-left reference pixels) is computed as a gradient compensation for the two-tap interpolation prediction. By contrast, this step uses neighboring pixels to compute the gradient, reducing some of the prediction error caused by pixel distance. Meanwhile, the gradient is also selected as the measure of the luminance variation of the pixels in a PU block of the PLANAR prediction mode and as the criterion for classification.
Specifically, in step 5 the least squares method is used for data training. The least squares method is a mathematical optimization method: it finds the best-fitting function of the data by minimizing the sum of squared errors. Using the least squares method, the optimal linear relation between the original pixels and the reference pixels can be trained while minimizing the sum of squared errors between the predicted values and the original values. Therefore, the best weight prediction coefficients obtained by training can greatly reduce the prediction error.
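For completeness, the least-squares solution used in step 5 has the standard closed form below (a textbook result, not quoted from the patent text); in practice a numerical solver based on QR or SVD factorization, such as the one sketched earlier, is used, and the closed form assumes that $X^{\mathsf{T}}X$ is invertible:

$$
\hat{\omega}_{i,j} \;=\; \arg\min_{\omega}\,\bigl\lVert Y_{i,j} - X\,\omega \bigr\rVert_2^2
\;=\; \bigl(X^{\mathsf{T}}X\bigr)^{-1}X^{\mathsf{T}}\,Y_{i,j}
$$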
The beneficial effects of the present invention are mainly embodied in the following aspects: by adopting the method and device according to the embodiments of the present invention, compared with the prior art, more attention is paid to the variation distribution of image textures, the prediction accuracy of the DC and PLANAR modes is improved, a better subjective effect is achieved in detailed regions of the image background, and, at the same time, the usage rate of the improved DC and PLANAR prediction modes increases compared with the original DC and PLANAR prediction modes. The experimental results are shown in Tables 1 to 3 below, where ΔR denotes the percentage change in bit rate and ΔPSNR denotes the change in PSNR value.
Table 1: increase in the usage rate of the DC mode and the PLANAR mode when coding after the improvement
(table data provided as an image in the original document)
Table 2: test results for the first group of QP values
(table data provided as an image in the original document)
Table 3: test results for the second group of QP values
(table data provided as an image in the original document)
In summary, those skilled in the art will appreciate that various modifications, variations and substitutions can be made to the above embodiments of the present invention, all of which fall within the protection scope of the present invention as defined by the appended claims.

Claims (11)

1. An intra-frame prediction method based on mathematical statistics and classification training in High Efficiency Video Coding, comprising the following steps:
Step 1: inputting video sequence data, wherein the video sequence data is composed of the luminance values of the pixels of video frames, each video frame is divided into a plurality of coding unit blocks, each coding unit block is further divided into prediction unit blocks, and a plurality of neighboring pixels of each prediction unit block in the video frame serve as the reference pixels of that prediction unit block;
Step 2: using the multiple standard intra prediction modes of High Efficiency Video Coding, performing intra prediction on each prediction unit block of the video frame, and, for each prediction unit block, determining the prediction mode with the minimum rate-distortion cost among the multiple prediction modes as its optimal prediction mode;
Step 3: calculating the variance of the reference pixels of each prediction unit block whose optimal prediction mode is a first prediction mode among the multiple standard intra prediction modes, comparing the calculated variance of each prediction unit block with predetermined variance thresholds, and, according to the comparison results, dividing the prediction unit blocks whose optimal prediction mode is the first prediction mode into a plurality of groups of prediction unit blocks;
Step 4: for every group of prediction unit blocks among the plurality of groups, using the least squares method to solve the equation

Y_{i,j} = X · ω_{i,j}

to obtain the weight prediction coefficient matrix ω_{i,j} of each pixel position in the prediction unit blocks of this group, whose size is n × 1, wherein X is the m × n matrix formed by the luminance values of all the reference pixels of the prediction unit blocks in this group, n is the number of reference pixels of each prediction unit block, m is the number of prediction unit blocks in this group, Y_{i,j} is the target pixel vector formed by the luminance values of the pixel at position (i, j) in each prediction unit block, whose size is m × 1, and i, j respectively denote the vertical and horizontal coordinates of the pixel within the prediction unit block.
2. The intra-frame prediction method according to claim 1, further comprising the following steps:
Step 5: continuing to input video sequence data, and performing the operations of steps 2 and 3 on the newly input video sequence data to obtain a plurality of groups of prediction unit blocks;
Step 6: using the weight prediction coefficient matrix ω_{i,j} to form a weighted sum of the luminance values of the reference pixels of each prediction unit block in each group of prediction unit blocks, the result of the weighted summation serving as the predicted luminance value of the pixel at the corresponding position (i, j) in each said prediction unit block.
3. The intra-frame prediction method according to claim 1 or 2, wherein step 3 further comprises:
Step 3-1: calculating the gradient values, in the horizontal direction and the vertical direction, of the reference pixels of each prediction unit block whose optimal prediction mode is a second prediction mode among the multiple standard intra prediction modes, comparing the calculated gradient values of each prediction unit block with predetermined gradient thresholds, and, according to the comparison results, dividing the prediction unit blocks whose optimal prediction mode is the second prediction mode into a plurality of groups of prediction unit blocks.
4. The intra-frame prediction method according to claim 1, wherein the multiple prediction modes comprise 33 angle prediction modes, a DC prediction mode and a PLANAR prediction mode.
5. The intra-frame prediction method according to claim 4, wherein the first prediction mode is the DC prediction mode.
6. The intra-frame prediction method according to claim 3, wherein the multiple prediction modes comprise 33 angle prediction modes, a DC prediction mode and a PLANAR prediction mode.
7. The intra-frame prediction method according to claim 6, wherein the first prediction mode is the DC prediction mode and the second prediction mode is the PLANAR prediction mode.
8. The intra-frame prediction method according to claim 7, wherein "dividing the prediction unit blocks whose optimal prediction mode is the first prediction mode into a plurality of groups of prediction unit blocks" in step 3 comprises:
dividing the prediction unit blocks whose variance value is 0–1 into one group, dividing the prediction unit blocks whose variance value is 1–2.5 into one group, dividing the prediction unit blocks whose variance value is 2.5–4 into one group, dividing the prediction unit blocks whose variance value is 4–10 into one group, and dividing the prediction unit blocks whose variance value is 10–40 into one group.
9. The intra-frame prediction method according to claim 8, wherein "dividing the prediction unit blocks whose optimal prediction mode is the second prediction mode into a plurality of groups of prediction unit blocks" in step 3-1 comprises:
dividing the prediction unit blocks whose gradient values in both the vertical direction and the horizontal direction are positive into one group, dividing the prediction unit blocks whose gradient values in both the vertical direction and the horizontal direction are negative into one group, dividing the prediction unit blocks whose gradient value in the vertical direction is positive and whose gradient value in the horizontal direction is negative into one group, dividing the prediction unit blocks whose gradient value in the vertical direction is negative and whose gradient value in the horizontal direction is positive into one group, and dividing the remaining prediction unit blocks into one group.
10. The intra-frame prediction method according to claim 1, wherein said steps 2 to 4 are carried out respectively for prediction unit blocks of sizes 4 × 4 and 8 × 8, so as to obtain respective weight prediction coefficient matrices for the prediction unit blocks of sizes 4 × 4 and 8 × 8.
11. An intra prediction device for carrying out the intra-frame prediction method according to any one of claims 2 to 10, comprising an HEVC coding unit module, an HEVC prediction unit module and a sample training module,
wherein the HEVC coding unit module performs quad-tree partitioning on the largest coding unit of a video frame, dividing the largest coding unit into a plurality of coding unit blocks,
the HEVC coding unit module further performs quad-tree partitioning on the coding unit blocks, each coding unit block being a single prediction unit block in the current partition layer and being divided into a plurality of prediction unit blocks in the next partition layer,
the HEVC prediction unit module obtains the bit rate and rate-distortion cost of the current prediction unit block under the current prediction mode through intra prediction, quantization, transform and pseudo-entropy coding operations within the prediction unit block, compares the bit rate and rate-distortion cost of the current partition layer and the next partition layer, and, according to the comparison result, decides whether to adopt the prediction unit block partition of the current partition layer or of the next partition layer,
the sample training module comprises a sample acquisition unit and a sample training unit,
the sample acquisition unit is configured to: when predictive coding is performed under the DC and PLANAR prediction modes, obtain from the video sequence data the luminance value of each pixel in each prediction unit block and the luminance values of the reference pixels of each prediction unit block, and provide them to the sample training unit, and
the sample training unit is configured to carry out said steps 3 to 6.
CN201310581162.4A 2013-11-19 2013-11-19 Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC Pending CN103596004A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310581162.4A CN103596004A (en) 2013-11-19 2013-11-19 Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310581162.4A CN103596004A (en) 2013-11-19 2013-11-19 Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC

Publications (1)

Publication Number Publication Date
CN103596004A true CN103596004A (en) 2014-02-19

Family

ID=50085963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310581162.4A Pending CN103596004A (en) 2013-11-19 2013-11-19 Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC

Country Status (1)

Country Link
CN (1) CN103596004A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284186A (en) * 2014-09-24 2015-01-14 复旦大学 Fast algorithm suitable for HEVC standard intra-frame prediction mode judgment process
CN104363450A (en) * 2014-11-27 2015-02-18 北京奇艺世纪科技有限公司 Intra-frame coding mode decision-making method and device
CN105681812A (en) * 2016-03-30 2016-06-15 腾讯科技(深圳)有限公司 HEVC (high efficiency video coding) intra-frame coding processing method and device
WO2016154963A1 (en) * 2015-04-01 2016-10-06 Mediatek Inc. Methods for chroma coding in video codec
CN106060565A (en) * 2016-07-08 2016-10-26 合肥工业大学 Planar prediction circuit and method applied to video encoding and decoding
CN107071405A (en) * 2016-10-27 2017-08-18 浙江大华技术股份有限公司 A kind of method for video coding and device
CN108259897A (en) * 2018-01-23 2018-07-06 北京易智能科技有限公司 A kind of intraframe coding optimization method based on deep learning
CN109510995A (en) * 2018-10-26 2019-03-22 西安科锐盛创新科技有限公司 A kind of prediction technique based on video compress
CN109819250A (en) * 2019-01-15 2019-05-28 北京大学 A kind of transform method and system of the full combination of multicore
WO2020258052A1 (en) * 2019-06-25 2020-12-30 Oppo广东移动通信有限公司 Image component prediction method and device, and computer storage medium
WO2021027928A1 (en) * 2019-08-14 2021-02-18 Beijing Bytedance Network Technology Co., Ltd. Weighting factors for prediction sample filtering in intra mode
US11659202B2 (en) 2019-08-14 2023-05-23 Beijing Bytedance Network Technology Co., Ltd Position-dependent intra prediction sample filtering

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102665079A (en) * 2012-05-08 2012-09-12 北方工业大学 Adaptive fast intra prediction mode decision for high efficiency video coding (HEVC)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102665079A (en) * 2012-05-08 2012-09-12 北方工业大学 Adaptive fast intra prediction mode decision for high efficiency video coding (HEVC)

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FATIH KAMISLI: "Intra prediction based on statistical modeling of images", 《VISUAL COMMUNICATIONS AND IMAGE PROCESSING》, 30 November 2012 (2012-11-30) *
GARY J.SULLIVAN: "Overview of the High Efficiency Video Coding (HEVC) Standard", 《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》, vol. 22, no. 12, 30 December 2012 (2012-12-30), XP011487803, DOI: doi:10.1109/TCSVT.2012.2221191 *
赵文强, 沈礼权, 张兆杨 (Zhao Wenqiang, Shen Liquan, Zhang Zhaoyang): "HEVC帧内预测算法的优化" [Optimization of HEVC intra prediction algorithms], 《电视技术》 [Video Engineering], vol. 36, no. 8, 17 April 2012 (2012-04-17) *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284186A (en) * 2014-09-24 2015-01-14 复旦大学 Fast algorithm suitable for HEVC standard intra-frame prediction mode judgment process
CN104363450B (en) * 2014-11-27 2017-10-27 北京奇艺世纪科技有限公司 A kind of intra-frame encoding mode decision-making technique and device
CN104363450A (en) * 2014-11-27 2015-02-18 北京奇艺世纪科技有限公司 Intra-frame coding mode decision-making method and device
WO2016154963A1 (en) * 2015-04-01 2016-10-06 Mediatek Inc. Methods for chroma coding in video codec
CN105681812A (en) * 2016-03-30 2016-06-15 腾讯科技(深圳)有限公司 HEVC (high efficiency video coding) intra-frame coding processing method and device
CN105681812B (en) * 2016-03-30 2019-11-19 腾讯科技(深圳)有限公司 HEVC intraframe coding treating method and apparatus
CN106060565A (en) * 2016-07-08 2016-10-26 合肥工业大学 Planar prediction circuit and method applied to video encoding and decoding
CN106060565B (en) * 2016-07-08 2019-01-29 合肥工业大学 A kind of Planar prediction circuit and Planar prediction technique applied to coding and decoding video
CN107071405A (en) * 2016-10-27 2017-08-18 浙江大华技术股份有限公司 A kind of method for video coding and device
CN107071405B (en) * 2016-10-27 2019-09-17 浙江大华技术股份有限公司 A kind of method for video coding and device
CN108259897B (en) * 2018-01-23 2021-08-27 北京易智能科技有限公司 Intra-frame coding optimization method based on deep learning
CN108259897A (en) * 2018-01-23 2018-07-06 北京易智能科技有限公司 A kind of intraframe coding optimization method based on deep learning
CN109510995A (en) * 2018-10-26 2019-03-22 西安科锐盛创新科技有限公司 A kind of prediction technique based on video compress
CN109510995B (en) * 2018-10-26 2020-12-18 宁波镇海昕龙网络科技有限公司 Prediction method based on video compression
CN109819250A (en) * 2019-01-15 2019-05-28 北京大学 A kind of transform method and system of the full combination of multicore
WO2020258052A1 (en) * 2019-06-25 2020-12-30 Oppo广东移动通信有限公司 Image component prediction method and device, and computer storage medium
WO2021027928A1 (en) * 2019-08-14 2021-02-18 Beijing Bytedance Network Technology Co., Ltd. Weighting factors for prediction sample filtering in intra mode
EP3997872A4 (en) * 2019-08-14 2022-10-26 Beijing Bytedance Network Technology Co., Ltd. Weighting factors for prediction sample filtering in intra mode
US11533477B2 (en) 2019-08-14 2022-12-20 Beijing Bytedance Network Technology Co., Ltd. Weighting factors for prediction sample filtering in intra mode
US11659202B2 (en) 2019-08-14 2023-05-23 Beijing Bytedance Network Technology Co., Ltd Position-dependent intra prediction sample filtering

Similar Documents

Publication Publication Date Title
CN103596004A (en) Intra-frame prediction method and device based on mathematical statistics and classification training in HEVC
CN102484719B (en) Method and apparatus for encoding video, and method and apparatus for decoding video
USRE47254E1 (en) Method and apparatus for encoding video, and method and apparatus for decoding video
CN104780364B (en) Determine the intra prediction mode of image coding unit and image decoding unit
CN104811714B (en) Use the enhancing intraframe predictive coding of plane expression
CN104301724B (en) Method for processing video frequency, encoding device and decoding device
CN102474599B (en) Method and apparatus for encoding images, and method and apparatus for decoding encoded images
US9313526B2 (en) Data compression for video
CN106067977B (en) For the device encoded to image
CN104935941B (en) The method being decoded to intra prediction mode
CN104796694B (en) Optimization intraframe video coding method based on video texture information
CN103248895B (en) A kind of quick mode method of estimation for HEVC intraframe coding
MX2012011646A (en) Method and apparatus for performing interpolation based on transform and inverse transform.
CN103067704B (en) A kind of method for video coding of skipping in advance based on coding unit level and system
CN103765901A (en) Method and apparatus for image encoding and decoding using intra prediction
CN101710993A (en) Block-based self-adaptive super-resolution video processing method and system
CN102187668B (en) Method and device for encoding image or image sequence and decoding method and device
CN102209243A (en) Depth map intra prediction method based on linear model
CN102415097A (en) Distortion weighing
CN110366850A (en) Method and apparatus for the method based on intra prediction mode processing image
CN107864380A (en) 3D HEVC fast intra-mode prediction decision-making techniques based on DCT
CN109874012A (en) A kind of method for video coding, encoder, electronic equipment and medium
CN1194544C (en) Video encoding method based on prediction time and space domain conerent movement vectors
CN103139563A (en) Method for coding and reconstructing a pixel block and corresponding devices
CN101854534B (en) Fast interframe mode selection method in H. 264

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20170510

AD01 Patent right deemed abandoned