CN104244006B - Video encoding and decoding method and device based on image super-resolution - Google Patents

Video encoding and decoding method and device based on image super-resolution

Info

Publication number
CN104244006B
CN104244006B (application CN201410230514.6A / CN201410230514A)
Authority
CN
China
Prior art keywords
image
block
resolution
dictionary
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410230514.6A
Other languages
Chinese (zh)
Other versions
CN104244006A (en)
Inventor
王荣刚
赵洋
王振宇
高文
王文敏
董胜富
黄铁军
马思伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University Shenzhen Graduate School
Original Assignee
Peking University Shenzhen Graduate School
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School filed Critical Peking University Shenzhen Graduate School
Priority to CN201410230514.6A priority Critical patent/CN104244006B/en
Publication of CN104244006A publication Critical patent/CN104244006A/en
Application granted granted Critical
Publication of CN104244006B publication Critical patent/CN104244006B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The video encoding and decoding method and device based on image super-resolution provided by the present application first perform super-resolution interpolation processing on the video image before the image to be encoded or decoded is predicted. This processing magnifies the image and recovers its detail information, so that when the image to be encoded or decoded is predicted to obtain prediction blocks, the original image is recovered more effectively than with the prior-art method of predicting the video image by linear interpolation. The blurred prediction-block edges that occur in the prior art are avoided, the accuracy of video image prediction is improved, and the coding efficiency of the video image is therefore improved.

Description

Video encoding and decoding method and device based on image super-resolution
Technical field
The present invention relates to the field of image super-resolution technology, and in particular to a video encoding and decoding method and device based on image super-resolution.
Background technique
Traditional coding methods compress the video image by exploiting the information redundancy of the image to be encoded and of the video itself. As coding techniques have advanced, the redundancy available to video coding has steadily decreased, and the spatio-temporal correlation of the image to be encoded and of the video itself is now fully utilized. Predicting the image and video to be encoded from information outside the image and video themselves, so as to reduce the amount of information that must be coded, is a new direction for substantially improving image and video compression efficiency.
In the prior art, sub-pixel motion compensation is generally used to improve the efficiency of inter-frame prediction of video images. To obtain sub-pixel information, linear interpolation is currently the common approach. Linear interpolation has the advantage of simplicity, but its disadvantage is that it can hardly recover the high-frequency details of a high-resolution image and blurs edge regions, which limits the efficiency of sub-pixel motion compensation.
Summary of the invention
The video encoding method based on image super-resolution provided by an embodiment of the present invention comprises: performing super-resolution interpolation processing on a video image by using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base comprises a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to that high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information; performing motion estimation and motion compensation on the reference image for each image block of the image to be encoded, to obtain a prediction block corresponding to each image block of the video image to be encoded; subtracting the corresponding prediction block from each image block of the video image to be encoded to obtain a prediction residual block; and encoding the prediction residual block.
The video decoding method based on image super-resolution provided by an embodiment of the present invention comprises: decoding an acquired encoded image bitstream to obtain a prediction residual block; performing super-resolution interpolation processing on a video image by using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base is a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to that high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information; performing motion compensation on the reference image for each image block of the video image to be decoded, to obtain a prediction block corresponding to each image block; and adding the prediction block to the prediction residual block to obtain the decoded video image.
The video encoding device based on image super-resolution provided by an embodiment of the present invention comprises: a super-resolution interpolation processing unit, configured to perform super-resolution interpolation processing on a video image by using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base comprises a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to that high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information; a prediction unit, configured to perform motion estimation and motion compensation on the reference image for each image block of the image to be encoded, to obtain a prediction block corresponding to each image block of the video image to be encoded; a subtraction unit, configured to subtract the corresponding prediction block obtained by the prediction unit from each image block of the video image to be encoded, to obtain a prediction residual block; and an encoding unit, configured to encode the prediction residual block computed by the subtraction unit.
The video decoding device based on image super-resolution provided by an embodiment of the present invention comprises: a decoding unit, configured to decode an acquired encoded image bitstream to obtain a prediction residual block; a super-resolution interpolation processing unit, configured to perform super-resolution interpolation processing on a video image by using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base comprises a high-resolution image block of a training image and the low-resolution image block corresponding to that high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information; a prediction unit, configured to perform motion compensation on the reference image for each image block of the video image to be decoded, to obtain a prediction block corresponding to each image block; and an addition unit, configured to add the prediction block obtained by the prediction unit to the prediction residual block obtained by the decoding unit, to obtain the decoded video image.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages:
The video encoding and decoding method and device based on image super-resolution provided by the present application perform super-resolution interpolation processing on the video image before the image to be encoded or decoded is predicted, so the image is magnified and its detail information is recovered. Therefore, when the image to be encoded or decoded is predicted to obtain prediction blocks, the original image is recovered more effectively than with the prior-art method of predicting the video image by linear interpolation, the blurred prediction-block edges of the prior art are avoided, the accuracy of video image prediction is improved, and the coding efficiency of the video image is therefore improved.
Detailed description of the invention
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of the video encoding method based on image super-resolution of embodiment one;
Fig. 2a-2c are schematic diagrams of feature extraction for the local texture structure of an image block in an embodiment of the present application;
Fig. 3 is a flowchart of an implementation of step 101, described in embodiment two;
Fig. 4 is a flowchart of the video decoding method based on image super-resolution of embodiment three;
Fig. 5 is a flowchart of an implementation of step 202, described in embodiment four;
Fig. 6 is a schematic structural diagram of the device of embodiment five of the present application;
Fig. 7 is a schematic structural diagram of the super-resolution interpolation processing unit of embodiment five of the present application;
Fig. 8 is a schematic structural diagram of the device of embodiment six of the present application;
Fig. 9 is a schematic structural diagram of the super-resolution interpolation processing unit of embodiment six of the present application.
Specific embodiment
Embodiments of the present application provide a video encoding and decoding method and device based on image super-resolution, which can recover the high-frequency information of an image and improve image quality. Applied to the temporal prediction of video images, this improves prediction accuracy and thus encoding and decoding efficiency.
The present application is described in further detail below through specific embodiments in conjunction with the accompanying drawings.
Embodiment one:
Referring to Fig. 1, Fig. 1 is a flowchart of a video encoding method based on image super-resolution in one embodiment. As shown in Fig. 1, this embodiment provides a video encoding method based on image super-resolution, which may include the following steps:
101. Perform super-resolution interpolation processing on the video image using a pre-trained texture dictionary library.
After the super-resolution interpolation processing, a reference image is obtained. The texture dictionary library comprises one or more groups of dictionary bases, each dictionary base being a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to that high-resolution image block. The super-resolution interpolation processing comprises image magnification and recovery of image detail information.
102. Perform motion estimation and motion compensation on the reference image for each image block of the video image to be encoded, to obtain the prediction block corresponding to each image block.
The image blocks may be divided on the video image according to a preset division rule, for example, every 2 × 2 pixels may form one image block. The present application only gives an example of a division rule and does not specifically limit it.
In this step, motion estimation and motion compensation may be performed in the reference image for each of the divided image blocks of the video image to be encoded, computing the positional offset of each image block in the reference frame and the corresponding pixel values, so as to obtain, after motion estimation, the prediction block corresponding to each image block of the video image to be encoded.
103. Subtract the corresponding prediction block from each image block of the video image to be encoded to obtain a prediction residual block.
104. Encode the prediction residual block.
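For illustration only, the following is a minimal sketch of the encoding loop of steps 101-104. It assumes a hypothetical super_resolution_interpolate function (standing in for the dictionary-based interpolation described later), a simple exhaustive-search SAD block matcher, and, for simplicity, a reference at the same resolution as the current frame (the patent magnifies the reference for sub-pixel search); it is not the patented implementation itself.

```python
import numpy as np

def block_match(block, ref, top, left, search=8):
    """Exhaustive-search motion estimation: find the best-matching block in `ref`
    around (top, left) within +/- `search` pixels (sum of absolute differences)."""
    h, w = block.shape
    best_cost, best_pred = np.inf, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                cand = ref[y:y + h, x:x + w]
                cost = np.abs(cand.astype(int) - block.astype(int)).sum()
                if cost < best_cost:
                    best_cost, best_pred = cost, cand
    return best_pred

def encode_frame(frame, ref_frame, super_resolution_interpolate, block_size=2):
    """Steps 101-104 (sketch): SR-interpolate the reference, predict each block by
    motion estimation/compensation, and return the prediction residual blocks."""
    ref = super_resolution_interpolate(ref_frame)            # step 101: reference image
    residuals = []
    for top in range(0, frame.shape[0] - block_size + 1, block_size):
        for left in range(0, frame.shape[1] - block_size + 1, block_size):
            block = frame[top:top + block_size, left:left + block_size]
            pred = block_match(block, ref, top, left)         # step 102: prediction block
            residuals.append(block.astype(int) - pred.astype(int))  # step 103
    return residuals   # step 104 would entropy-code these residual blocks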
The video encoding method based on image super-resolution provided by embodiment one of the present application performs super-resolution interpolation processing on the video image using a pre-trained texture dictionary library, which magnifies the image and recovers its detail information. Motion estimation is then carried out on the reference image obtained after the super-resolution interpolation processing to obtain the corresponding prediction blocks, the prediction blocks are subtracted from the video image to be encoded to obtain residual blocks, and the residual blocks are encoded. Compared with the prior-art method of predicting the video image by linear interpolation, the method of the present application first performs super-resolution interpolation processing on the video image before the video image to be encoded is predicted, so the image is magnified and its detail information is recovered. In this way, when motion estimation is performed on the image to be encoded to obtain prediction blocks, the blurred prediction-block edges of the prior art do not occur, prediction accuracy is improved, and coding efficiency is improved.
In a preferred embodiment, each dictionary base in the texture dictionary library is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to that high-resolution image block. The local features include the local binary structure (LBS, Local Binary Structure) and the sharp edge structure (SES, Sharp Edge Structure).
In this embodiment, the texture dictionary is obtained by training in advance. The pre-training of the texture dictionary may be carried out as follows:
S1. Select multiple high-resolution local image blocks from a training image set containing several training images, where each high-resolution local image block is formed by at least two pixels of the image. Down-sample the training images to obtain the low-resolution local image block corresponding one-to-one to each local image block.
S2. Extract the local features of the high-resolution local image blocks to obtain the high-resolution dictionary samples Dh(y), and extract the local features of the low-resolution local image blocks corresponding one-to-one to the local image blocks to obtain the low-resolution dictionary samples Dl(y). Map the high-resolution dictionary samples to the low-resolution dictionary samples and combine them to obtain groups of dictionary base samples. The local features include LBS and SES.
S3. Train the groups of dictionary base samples to obtain the texture dictionary library.
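As a sketch of steps S1-S2 only, the following extracts co-located high-resolution/low-resolution block pairs from training images. Naive decimation followed by pixel repetition stands in for the unspecified down-sampling operator, and the patch size and scale factor are assumptions.

```python
import numpy as np

def extract_patch_pairs(training_images, patch=4, scale=2):
    """Steps S1-S2 (sketch): cut HR local blocks from each training image, build a
    degraded copy by down-sampling and re-magnifying, and pair co-located blocks."""
    hr_blocks, lr_blocks = [], []
    for img in training_images:                          # img: 2-D grayscale array
        h = (img.shape[0] // scale) * scale
        w = (img.shape[1] // scale) * scale
        img = img[:h, :w]                                # crop to a multiple of scale
        low = img[::scale, ::scale]                      # naive decimation (assumed)
        low = np.repeat(np.repeat(low, scale, 0), scale, 1)   # re-magnify to HR size
        for top in range(0, h - patch + 1, patch):
            for left in range(0, w - patch + 1, patch):
                hr_blocks.append(img[top:top + patch, left:left + patch].ravel())
                lr_blocks.append(low[top:top + patch, left:left + patch].ravel())
    return np.array(hr_blocks), np.array(lr_blocks)
```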
The process and principle of performing super-resolution interpolation processing on a video image using the pre-trained texture dictionary library in embodiment one of the present application are explained below.
As shown in Figs. 2a, 2b and 2c, A, B, C and D are four locally adjacent pixels, and in the figures the height drawn for each pixel reflects the magnitude of its gray value. As shown in Fig. 2a, the four pixels A, B, C and D form a flat local region, so their gray values are equal in magnitude. As shown in Fig. 2b, the gray values of pixels A and B are higher than those of pixels C and D. This embodiment defines LBS-Geometry (LBS_G) to distinguish such differences in geometric structure; LBS-Geometry (LBS_G) is computed as in formula (1):
where g_p denotes the gray value of the p-th local pixel and g_mean is the mean pixel value of the local region formed by the four pixels A, B, C and D. This embodiment uses 4 pixels as an example; in other embodiments, the number of pixels may be another value, for example N, where N is a positive integer.
For the local image blocks shown in Figs. 2b and 2c, the degree of gray difference is different, so the two still belong to different local patterns. This embodiment therefore defines LBS-Difference (LBS_D) to represent the degree of local gray difference, giving formula (2):
where d_global is the mean of all local gray differences in the whole image.
Combining LBS_G and LBS_D forms the complete local binary structure descriptor, as shown in formula (3):
In addition, this embodiment defines the sharp edge structure SES, as in formula (4):
where t is a preset gray threshold. In a specific embodiment, t is set to a relatively large threshold for distinguishing sharp edges.
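Formulas (1)-(4) appear only as images in the original document and are not reproduced here. Purely as an illustration of the textual description above (LBS_G from the sign of g_p − g_mean, LBS_D from comparing d_p = |g_p − g_mean| with d_global, SES from comparing d_p with the threshold t), the sketch below computes plausible descriptors; the exact bit layout is an assumption, not the patented formula.

```python
import numpy as np

def local_descriptors(patch, d_global, t):
    """Sketch of LBS_G / LBS_D / SES for one local patch (e.g. the pixels A, B, C, D).
    Assumed form: one bit per pixel, packed into small integers."""
    g = patch.ravel().astype(float)
    g_mean = g.mean()
    d = np.abs(g - g_mean)                                            # d_p = |g_p - g_mean|
    lbs_g = sum((1 << p) for p in range(g.size) if g[p] >= g_mean)    # geometry bits
    lbs_d = sum((1 << p) for p in range(g.size) if d[p] >= d_global)  # difference bits
    ses   = sum((1 << p) for p in range(g.size) if d[p] >= t)         # sharp-edge bits
    return lbs_g, lbs_d, ses
```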
In this embodiment, the texture dictionary may be trained by K-means clustering to obtain an undercomplete dictionary, or by sparse coding to obtain an overcomplete dictionary.
When the dictionary is trained by K-means clustering, a certain number of samples (for example 100,000) are chosen from the feature samples, a number of class centers are clustered out by the K-means algorithm, and the set of these class centers is used as the texture dictionary library. Training the dictionary by K-means clustering builds a low-dimensional undercomplete dictionary library.
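A minimal sketch of the K-means variant follows, using scikit-learn. The way each low-resolution class center is paired with a high-resolution counterpart (the mean of the HR blocks assigned to that cluster), and all parameter values, are assumptions made only for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_kmeans_dictionary(lr_features, hr_blocks, n_atoms=1024, n_samples=100_000):
    """Cluster low-resolution feature samples into class centers (the dictionary Dl)
    and pair each center with the mean of its assigned high-resolution blocks (Dh)."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(lr_features), size=min(n_samples, len(lr_features)), replace=False)
    km = KMeans(n_clusters=n_atoms, n_init=4, random_state=0).fit(lr_features[idx])
    Dl = km.cluster_centers_                      # undercomplete low-resolution dictionary
    labels = km.predict(lr_features)
    Dh = np.stack([hr_blocks[labels == k].mean(axis=0) for k in range(n_atoms)])
    return Dl, Dh
```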
In a preferred embodiment, when super-resolution interpolation is performed on a video image, an unknown high-resolution local image block x in the image can be expressed as a combination of several dictionary bases in the texture dictionary library:
x ≈ Dh(y)α …………(5)
where y is the low-resolution local image block corresponding to the high-resolution local image block x, Dh(y) is the high-resolution dictionary sample of the dictionary bases having the same LBS and SES as y, and α is the representation coefficient.
When an overcomplete dictionary is used, the coefficient α satisfies sparsity. The sparse representation coefficient α is computed using the low-resolution dictionary sample Dl(y), and the computed coefficient α is then substituted into formula (5) to compute the corresponding high-resolution local image block x. Obtaining the optimal α can therefore be converted into the following optimization problem:
where ε is a small quantity tending to 0 and F is the feature-extraction operator; in the dictionary D provided in this embodiment, the feature taken is the local gray difference combined with the gradient magnitude. Since α is sufficiently sparse, the L0 norm in formula (6) is replaced with the L1 norm, and the optimization problem becomes:
where λ is a coefficient balancing sparsity and similarity. The optimal sparse representation coefficient α can be obtained by solving the above Lasso problem, and substituting it into formula (5) then yields the high-resolution local image block x corresponding to y.
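Formulas (6) and (7) also appear only as images in the original document. The block below is an assumed reconstruction, giving the standard sparse-coding formulation that the surrounding description outlines (an L0-constrained fit relaxed to an L1-penalized Lasso); it should not be read as the patent's exact notation.

```latex
% Assumed reconstruction, consistent with the description of formulas (6) and (7):
% (6) sparsest coefficient whose low-resolution features reproduce those of y
\min_{\alpha}\ \|\alpha\|_{0}
\quad \text{s.t.} \quad \|F\,D_{l}(y)\,\alpha - F\,y\|_{2}^{2} \le \varepsilon
% (7) L1 relaxation (Lasso) that is actually solved
\min_{\alpha}\ \|F\,D_{l}(y)\,\alpha - F\,y\|_{2}^{2} + \lambda\,\|\alpha\|_{1}
```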
When an undercomplete dictionary is used, α does not satisfy sufficient sparsity. Instead, the k dictionary bases Dl(y) closest to y are found by the k-nearest-neighbour algorithm, and x is then reconstructed as a linear combination of the k high-resolution dictionary samples Dh(y) corresponding to those Dl(y).
After every distorted low-resolution local block y in the image has been reconstructed into its clear high-resolution image block x, the final clear restored image is obtained.
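As an illustration only, the following sketch reconstructs one patch under either dictionary variant: a scikit-learn Lasso for the overcomplete case (the role of formula (7)) and a k-nearest-neighbour combination for the undercomplete case. The dictionary layout, the least-squares choice of combination weights, and the parameter values are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruct_patch(y_feat, Dl, Dh, overcomplete=True, lam=0.1, k=5):
    """Reconstruct one high-resolution patch x from low-resolution features y_feat.
    Dl: (n_atoms, feat_dim) low-resolution dictionary, Dh: (n_atoms, patch_dim)."""
    if overcomplete:
        # Overcomplete dictionary: solve a Lasso for the sparse coefficient alpha.
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(Dl.T, y_feat)
        alpha = lasso.coef_
    else:
        # Undercomplete dictionary: k nearest atoms, least-squares combination weights.
        dist = np.linalg.norm(Dl - y_feat, axis=1)
        nn = np.argsort(dist)[:k]
        sol, *_ = np.linalg.lstsq(Dl[nn].T, y_feat, rcond=None)
        alpha = np.zeros(len(Dl))
        alpha[nn] = sol
    return Dh.T @ alpha        # formula (5): x is approximated by Dh(y) * alpha
```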
Embodiment two:
Referring to Fig. 3, Fig. 3 is a flowchart of an implementation of step 101 in embodiment one. In this embodiment, each dictionary base in the texture dictionary is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to that high-resolution image block, the local features including the local binary structure and the sharp edge structure.
In the video encoding method based on image super-resolution provided by this embodiment, performing super-resolution interpolation processing on the video image using the pre-trained texture dictionary library may specifically include the following steps (see also the matching sketch after this list):
101a. Extract the local feature of each image block of the video image.
101b. Match the local feature of each image block of the video image against the local features of the dictionary bases in the texture dictionary library to obtain matched dictionary bases.
101c. Use the matched dictionary bases to recover image detail information and magnify the corresponding image blocks of the video image.
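Purely as an illustration of steps 101a-101c, the sketch below indexes the dictionary bases by the (LBS, SES) descriptor of their low-resolution block so that each image block can be matched by a simple lookup; this indexing scheme and the descriptor callback are assumptions, not the patented data structure.

```python
from collections import defaultdict

def build_index(dictionary_bases, descriptor):
    """Group (lr_block, hr_block) dictionary bases by the descriptor of the LR block."""
    index = defaultdict(list)
    for lr_block, hr_block in dictionary_bases:
        index[descriptor(lr_block)].append((lr_block, hr_block))
    return index

def match_block(image_block, index, descriptor):
    """Steps 101a/101b: describe the block, then return the matched dictionary bases."""
    return index.get(descriptor(image_block), [])
```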
Embodiment three:
Referring to Fig. 4, Fig. 4 is a flowchart of a video decoding method based on image super-resolution in one embodiment. As shown in Fig. 4, the video decoding method based on image super-resolution provided by this embodiment may include the following steps:
201. Decode the acquired encoded image bitstream to obtain prediction residual blocks.
202. Perform super-resolution interpolation processing on the video image using the pre-trained texture dictionary library to obtain a reference image. The texture dictionary library comprises one or more groups of dictionary bases, each dictionary base comprising a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to that high-resolution image block; the super-resolution interpolation processing comprises image magnification and recovery of image detail information.
203. Perform motion compensation on the reference image for each image block of the video image to be decoded, to obtain the prediction blocks.
204. Add the prediction blocks to the prediction residual blocks to obtain the decoded video image.
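A minimal sketch of steps 201-204 follows, mirroring the encoder sketch above. It assumes the residual blocks and motion vectors have already been decoded from the bitstream and reuses the same hypothetical super_resolution_interpolate function; it is not the patented decoder itself.

```python
import numpy as np

def decode_frame(residuals, motion_vectors, ref_frame,
                 super_resolution_interpolate, frame_shape, block_size=2):
    """Steps 201-204 (sketch): SR-interpolate the reference, fetch each prediction
    block by its motion vector, and add the decoded residual to rebuild the frame."""
    ref = super_resolution_interpolate(ref_frame)            # step 202: reference image
    out = np.zeros(frame_shape, dtype=int)
    i = 0
    for top in range(0, frame_shape[0] - block_size + 1, block_size):
        for left in range(0, frame_shape[1] - block_size + 1, block_size):
            dy, dx = motion_vectors[i]                       # raster order, one per block
            pred = ref[top + dy:top + dy + block_size,       # step 203: prediction block
                       left + dx:left + dx + block_size]
            out[top:top + block_size,
                left:left + block_size] = pred + residuals[i]   # step 204
            i += 1
    return out
```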
In the video decoding method based on image super-resolution provided by embodiment three of the present application, the acquired encoded image bitstream is decoded to obtain prediction residual blocks, super-resolution interpolation processing (comprising image magnification and recovery of image detail information) is performed on the video image using the pre-trained texture dictionary library, motion compensation is performed on the interpolated video image to obtain prediction blocks, and the prediction blocks are added to the prediction residual blocks to obtain the decoded video image. Compared with the prior-art method of predicting the video image by linear interpolation, the method of the present application first performs super-resolution interpolation processing on the video image before the video image to be decoded is predicted, so the image is magnified and its detail information is recovered. In this way, when motion compensation is performed on the image to be decoded to obtain prediction blocks, the blurred prediction-block edges of the prior art do not occur, prediction accuracy is improved, and decoding efficiency is improved.
Embodiment four:
Referring to Fig. 5, Fig. 5 is a flowchart of an implementation of step 202 in embodiment three. In this embodiment, each dictionary base in the texture dictionary is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to that high-resolution image block, the local features including the local binary structure and the sharp edge structure.
Performing super-resolution interpolation processing on the video image using the pre-trained texture dictionary library may specifically include the following steps:
202a. Extract the local feature of each image block of the video image.
202b. Match the local feature of each image block against the local features of the dictionary bases in the texture dictionary library to obtain matched dictionary bases.
202c. Use the matched dictionary bases to recover detail information and magnify the video image.
Embodiment five:
Referring to Fig. 6, this embodiment correspondingly provides a video encoding device based on image super-resolution, which may include:
a super-resolution interpolation processing unit 60, configured to perform super-resolution interpolation processing on the video image using a pre-trained texture dictionary library, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base comprising a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to that high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information;
a prediction unit 61, configured to perform motion estimation and motion compensation for each image block of the image to be encoded on the reference image obtained after the super-resolution interpolation processing unit 60 performs super-resolution interpolation processing, to obtain the prediction block corresponding to each image block of the video image to be encoded;
a subtraction unit 62, configured to subtract the corresponding prediction block obtained by the prediction unit 61 from each image block of the video image to be encoded, to obtain a prediction residual block;
an encoding unit 63, configured to encode the prediction residual block computed by the subtraction unit 62.
In a preferred embodiment, referring to Fig. 6, each dictionary base in the texture dictionary is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to that high-resolution image block; the local features include the local binary structure and the sharp edge structure.
Referring to Fig. 7, Fig. 7 is a schematic structural diagram of the super-resolution interpolation processing unit of embodiment five of the present application. As shown in Fig. 7, the super-resolution interpolation processing unit 60 may specifically include:
an extraction module 601, configured to extract the local feature of each image block of the video image;
a matching module 602, configured to match the local feature of each image block of the video image extracted by the extraction module 601 against the local features of the dictionary bases in the texture dictionary library, to obtain matched dictionary bases;
an image processing module 603, configured to use the dictionary bases matched by the matching module 602 to recover image detail information and magnify the corresponding image blocks of the video image.
Embodiment six:
Referring to Fig. 8, this embodiment provides a video decoding device based on image super-resolution, which may include:
a decoding unit 70, configured to decode the acquired encoded image bitstream to obtain a prediction residual block;
a super-resolution interpolation processing unit 71, configured to perform super-resolution interpolation processing on the video image using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base comprising a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to that high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information;
a prediction unit 72, configured to perform motion compensation for each image block of the video image to be decoded on the reference image obtained after the super-resolution interpolation processing unit 71 performs interpolation processing, to obtain the prediction block corresponding to each image block;
an addition unit 73, configured to add the prediction block obtained by the prediction unit 72 to the prediction residual block obtained by the decoding unit, to obtain the decoded video image.
In a preferred embodiment, referring to Fig. 9, Fig. 9 is a schematic structural diagram of the super-resolution interpolation processing unit. Each dictionary base in the texture dictionary is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to that high-resolution image block; the local features include the local binary structure and the sharp edge structure.
The super-resolution interpolation processing unit 71 includes:
an extraction module 710, configured to extract the local features of the video image;
a matching module 711, configured to match the local feature of each image block of the video image extracted by the extraction module 710 against the local features of the dictionary bases in the texture dictionary library, to obtain matched dictionary bases;
an image processing module 712, configured to use the dictionary bases matched by the matching module 711 to recover detail information and magnify the video image.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be completed by a program instructing the related hardware, and the program may be stored in a computer-readable storage medium. The storage medium may include a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like.
The foregoing describes only preferred embodiments of the present invention. It should be understood that these embodiments are used only to explain the present invention and are not intended to limit it. Those of ordinary skill in the art may modify the above specific implementations according to the idea of the present invention.

Claims (8)

1. A video encoding method based on image super-resolution, characterized by comprising:
performing super-resolution interpolation processing on a video image by using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base comprises a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to the high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information;
performing motion estimation and motion compensation on the reference image for each image block of the video image to be encoded, to obtain a prediction block corresponding to each image block of the video image to be encoded;
subtracting the corresponding prediction block from each image block of the video image to be encoded to obtain a prediction residual block;
encoding the prediction residual block;
wherein each dictionary base in the texture dictionary library is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to the high-resolution image block, the local features comprising the local binary structure LBS and the sharp edge structure SES;
the local binary structure is described by the following formula:
g_p denotes the gray value of the p-th local pixel, g_mean is the mean pixel value of the local region formed by 4 pixels, and d_global is the mean of all local gray differences in the whole image;
the sharp edge structure SES is described by the following formula:
t is a preset gray threshold; in the above two formulas, d_p = |g_p − g_mean|.
2. The video encoding method based on image super-resolution according to claim 1, characterized in that the performing super-resolution interpolation processing on the video image by using the pre-trained texture dictionary library comprises:
extracting the local feature of each image block of the video image;
matching the local feature of each image block of the video image against the local features of the dictionary bases in the texture dictionary library to obtain matched dictionary bases;
using the matched dictionary bases to perform image detail information recovery and image magnification on the corresponding image blocks of the video image.
3. A video decoding method based on image super-resolution, characterized by comprising:
decoding an acquired encoded image bitstream to obtain a prediction residual block;
performing super-resolution interpolation processing on a video image by using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base is a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to the high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information;
performing motion compensation on the reference image for each image block of the video image to be decoded, to obtain a prediction block corresponding to each image block;
adding the prediction block to the prediction residual block to obtain the decoded video image;
wherein each dictionary base in the texture dictionary library is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to the high-resolution image block, the local features comprising the local binary structure LBS and the sharp edge structure SES;
the local binary structure is described by the following formula:
g_p denotes the gray value of the p-th local pixel, g_mean is the mean pixel value of the local region formed by 4 pixels, and d_global is the mean of all local gray differences in the whole image;
the sharp edge structure SES is described by the following formula:
t is a preset gray threshold; in the above two formulas, d_p = |g_p − g_mean|.
4. The video decoding method based on image super-resolution according to claim 3, characterized in that the performing super-resolution interpolation processing on the video image by using the pre-trained texture dictionary library comprises:
extracting the local feature of each image block of the video image;
matching the local feature of each image block of the video image against the local features of the dictionary bases in the texture dictionary library to obtain matched dictionary bases;
using the matched dictionary bases to perform detail information recovery and image magnification on the video image.
5. A video encoding device based on image super-resolution, characterized by comprising:
a super-resolution interpolation processing unit, configured to perform super-resolution interpolation processing on a video image by using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base comprises a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to the high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information;
a prediction unit, configured to perform motion estimation and motion compensation on the reference image for each image block of the video image to be encoded, to obtain a prediction block corresponding to each image block of the video image to be encoded;
a subtraction unit, configured to subtract the corresponding prediction block obtained by the prediction unit from each image block of the video image to be encoded, to obtain a prediction residual block;
an encoding unit, configured to encode the prediction residual block computed by the subtraction unit;
wherein each dictionary base in the texture dictionary library is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to the high-resolution image block, the local features comprising the local binary structure LBS and the sharp edge structure SES;
the local binary structure is described by the following formula:
g_p denotes the gray value of the p-th local pixel, g_mean is the mean pixel value of the local region formed by 4 pixels, and d_global is the mean of all local gray differences in the whole image;
the sharp edge structure SES is described by the following formula:
t is a preset gray threshold; in the above two formulas, d_p = |g_p − g_mean|.
6. The video encoding device based on image super-resolution according to claim 5, characterized in that
the super-resolution interpolation processing unit specifically comprises:
an extraction module, configured to extract the local feature of each image block of the video image;
a matching module, configured to match the local feature of each image block of the video image extracted by the extraction module against the local features of the dictionary bases in the texture dictionary library, to obtain matched dictionary bases;
an image processing module, configured to use the dictionary bases matched by the matching module to perform image detail information recovery and image magnification on the corresponding image blocks of the video image.
7. A video decoding device based on image super-resolution, characterized by comprising:
a decoding unit, configured to decode an acquired encoded image bitstream to obtain a prediction residual block;
a super-resolution interpolation processing unit, configured to perform super-resolution interpolation processing on a video image by using a pre-trained texture dictionary library to obtain a reference image, wherein the texture dictionary library comprises one or more groups of dictionary bases, each dictionary base is a mapping pair formed by a high-resolution image block of a training image and the low-resolution image block corresponding to the high-resolution image block, and the super-resolution interpolation processing comprises image magnification and recovery of image detail information;
a prediction unit, configured to perform motion compensation on the reference image for each image block of the video image to be decoded, to obtain a prediction block corresponding to each image block;
an addition unit, configured to add the prediction block obtained by the prediction unit to the prediction residual block obtained by the decoding unit, to obtain the decoded video image;
wherein each dictionary base in the texture dictionary library is classified according to the local feature of the high-resolution image block of each training image and the local feature of the low-resolution image block corresponding to the high-resolution image block, the local features comprising the local binary structure LBS and the sharp edge structure SES;
the local binary structure is described by the following formula:
g_p denotes the gray value of the p-th local pixel, g_mean is the mean pixel value of the local region formed by 4 pixels, and d_global is the mean of all local gray differences in the whole image;
the sharp edge structure SES is described by the following formula:
t is a preset gray threshold; in the above two formulas, d_p = |g_p − g_mean|.
8. The video decoding device based on image super-resolution according to claim 7, characterized in that
the super-resolution interpolation processing unit comprises:
an extraction module, configured to extract the local features of the video image;
a matching module, configured to match the local feature of each image block of the video image extracted by the extraction module against the local features of the dictionary bases in the texture dictionary library, to obtain matched dictionary bases;
an image processing module, configured to use the dictionary bases matched by the matching module to perform detail information recovery and image magnification on the video image.
CN201410230514.6A 2014-05-28 2014-05-28 Video encoding and decoding method and device based on image super-resolution Active CN104244006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410230514.6A CN104244006B (en) 2014-05-28 2014-05-28 Video encoding and decoding method and device based on image super-resolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410230514.6A CN104244006B (en) 2014-05-28 2014-05-28 Video encoding and decoding method and device based on image super-resolution

Publications (2)

Publication Number Publication Date
CN104244006A CN104244006A (en) 2014-12-24
CN104244006B true CN104244006B (en) 2019-02-26

Family

ID=52231221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410230514.6A Active CN104244006B (en) 2014-05-28 2014-05-28 Video encoding and decoding method and device based on image super-resolution

Country Status (1)

Country Link
CN (1) CN104244006B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740505B (en) * 2018-12-29 2021-06-18 成都视观天下科技有限公司 Training data generation method and device and computer equipment
CN110111251B (en) * 2019-04-22 2023-04-28 电子科技大学 Image super-resolution reconstruction method combining depth supervision self-coding and perception iterative back projection
CN110381321B (en) * 2019-08-23 2021-08-31 西安邮电大学 Interpolation calculation parallel implementation method for motion compensation
CN112218072B (en) * 2020-10-10 2023-04-07 南京大学 Video coding method based on deconstruction compression and fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102986220A (en) * 2010-07-20 2013-03-20 西门子公司 Video coding with reference frames of high resolution
CN103077511A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure similarity
CN103297784A (en) * 2008-10-31 2013-09-11 Sk电信有限公司 Apparatus for encoding image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100413316C (en) * 2006-02-14 2008-08-20 华为技术有限公司 Ultra-resolution ratio reconstructing method for video-image
JP4646146B2 (en) * 2006-11-30 2011-03-09 ソニー株式会社 Image processing apparatus, image processing method, and program
KR101675116B1 (en) * 2009-08-06 2016-11-10 삼성전자 주식회사 Method and apparatus for encoding video, and method and apparatus for decoding video
KR101457894B1 (en) * 2009-10-28 2014-11-05 삼성전자주식회사 Method and apparatus for encoding image, and method and apparatus for decoding image
JP2012142865A (en) * 2011-01-05 2012-07-26 Sony Corp Image processing apparatus and image processing method
US9324133B2 (en) * 2012-01-04 2016-04-26 Sharp Laboratories Of America, Inc. Image content enhancement using a dictionary technique

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103297784A (en) * 2008-10-31 2013-09-11 Sk电信有限公司 Apparatus for encoding image
CN102986220A (en) * 2010-07-20 2013-03-20 西门子公司 Video coding with reference frames of high resolution
CN103077511A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure similarity

Also Published As

Publication number Publication date
CN104244006A (en) 2014-12-24


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant