CN103841410A - Half reference video QoE objective evaluation method based on image feature information - Google Patents

Half reference video QoE objective evaluation method based on image feature information

Info

Publication number
CN103841410A
CN103841410A
Authority
CN
China
Prior art keywords
video
saliency
texture information
information
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410079834.6A
Other languages
Chinese (zh)
Other versions
CN103841410B (en)
Inventor
李文璟
喻鹏
罗千
耿杨
嵇华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201410079834.6A
Publication of CN103841410A
Application granted
Publication of CN103841410B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a half-reference video QoE objective evaluation method based on image feature information. At the operator side, a saliency information map and a texture information map are extracted for each frame of the original video and compressed to obtain the half-reference data of the original video. The user side receives the half-reference data of the original video together with the damaged video transmitted from the operator side, extracts a saliency information map and a texture information map for each frame of the damaged video to obtain the half-reference data of the damaged video, computes the degree of impairment of the damaged video from the half-reference data of the original and damaged videos, and assesses the subjective perceived quality MOS with a pre-trained neural network.

Description

Half-reference video QoE objective evaluation method based on image feature information
Technical field
The present invention relates to the field of communication technology, and in particular to a half-reference video QoE objective evaluation method based on image feature information.
Background technology
With the spread of wireless networks and high-speed broadband access, real-time video services are developing rapidly. QoE (Quality of Experience) metrics reflect the service quality of real-time video traffic. An objective QoE quality evaluation method for real-time video traffic (also called a QoE objective evaluation method) estimates subjective scores from specific objective quality-of-service indicators. According to how much of the original video data they use, QoE objective evaluation methods fall into three classes: full-reference (requiring all of the original data), half-reference (requiring part of the original data), and no-reference (requiring none of the original data).
Most existing QoE objective evaluation methods are full-reference or no-reference. Full-reference methods give the most accurate assessments but are difficult to apply in practice; no-reference methods are easy to deploy but are usually applicable only to specific impairment scenarios. Half-reference methods can strike a better balance between the two, but mature schemes are lacking.
The problems with the prior art are that both full-reference and no-reference methods have practical limitations, and that the prior art lacks side-by-side comparisons of assessment accuracy.
Summary of the invention
The technical problem to be solved by the present invention is that the prior art has practical limitations and lacks side-by-side comparisons of assessment accuracy.
To this end, the present invention proposes a half-reference video QoE objective evaluation method based on image feature information, the method comprising:
the operator side extracting a saliency information map and a texture information map of each frame of the original video, and compressing the saliency and texture information maps to obtain the half-reference data of the original video;
the user side receiving the half-reference data of the original video and the damaged video transmitted by the operator side, extracting a saliency information map and a texture information map of each frame of the damaged video to obtain the half-reference data of the damaged video, computing the degree of impairment of the damaged video from the half-reference data of the original video and of the damaged video, and assessing the subjective perceived quality MOS with a pre-trained neural network, wherein the damaged video is the operator side's original video after transmission over a lossy channel.
Wherein the saliency information map comprises a temporal saliency information map and a spatial saliency information map.
Wherein the saliency information map comprises weighted intensity, color, orientation, and skin-color components.
Wherein the texture information map comprises a temporal texture information map and a spatial texture information map.
Wherein the extraction of the texture information map of each frame comprises: edge extraction, morphological dilation, and superposition;
wherein the edge extraction comprises: extracting the edge information image of the current frame;
the morphological dilation comprises: applying morphological dilation to the edge information image of the current frame to obtain the processed edge information image;
the superposition comprises: superposing the processed edge information image onto the current frame to obtain the texture information map of the current frame.
Wherein the texture information map of each frame comprises: the texture information map of each frame of the original video at the operator side, and the texture information map of each frame of the damaged video at the user side.
Wherein the compression processing comprises:
decomposing the spatial and temporal saliency information maps and texture information maps with a wavelet transform to obtain different high-frequency subbands;
making a histogram of each high-frequency subband;
fitting the histograms of all the high-frequency subbands with a generalized Gaussian distribution GGD and computing the fitting errors.
Wherein the half-reference data comprise: the spatial saliency information impairment, temporal saliency information impairment, spatial texture information impairment, and temporal texture information impairment.
Wherein computing the degree of impairment of the damaged video from the half-reference data of the original video and of the damaged video comprises: computing the degree of impairment of the damaged video via relative entropy from the half-reference data of the original video and of the damaged video.
Compared with the prior art, the method provided by the invention has the following benefits: the proposed objective QoE quality assessment method for real-time video traffic is insensitive to the type of video impairment and yields fairly accurate assessments for videos damaged by different causes; the invention is insensitive to the underlying transport network and can perform objective quality assessment of real-time video traffic in a variety of practical scenarios (including LANs, WANs, and wireless environments); and the invention is easy to deploy and implement, since all functional modules can be realized in software, with hardware acceleration an option where particular demands exist.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 shows the flow chart of the half-reference video QoE objective evaluation method based on image feature information;
Fig. 2 shows the results of assessing the LVQD database in embodiment 3.
Embodiment
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment 1:
This embodiment discloses a half-reference video QoE objective evaluation method based on image feature information, the method comprising:
the operator side extracting a saliency information map and a texture information map of each frame of the original video, and compressing the saliency and texture information maps to obtain the half-reference data of the original video;
the user side receiving the half-reference data of the original video and the damaged video transmitted by the operator side, extracting a saliency information map and a texture information map of each frame of the damaged video to obtain the half-reference data of the damaged video, computing the degree of impairment of the damaged video from the half-reference data of the original video and of the damaged video, and assessing the subjective perceived quality MOS with a pre-trained neural network, wherein the damaged video is the operator side's original video after transmission over a lossy channel.
Wherein the saliency information map comprises a temporal saliency information map and a spatial saliency information map.
Wherein the saliency information map comprises weighted intensity, color, orientation, and skin-color components; here the skin-color component weight is set to 2 and the remaining component weights to 1, and the weights may also be adjusted to actual needs.
Wherein the texture information map comprises a temporal texture information map and a spatial texture information map.
Wherein the extraction of the texture information map of each frame comprises: edge extraction, morphological dilation, and superposition;
wherein the edge extraction comprises: extracting the edge information image of the current frame;
the morphological dilation comprises: applying morphological dilation to the edge information image of the current frame to obtain the processed edge information image;
the superposition comprises: superposing the processed edge information image onto the current frame to obtain the texture information map of the current frame.
Wherein the texture information map of each frame comprises: the texture information map of each frame of the original video at the operator side, and the texture information map of each frame of the damaged video at the user side.
At the operator side, the compression processing comprises:
decomposing the spatial and temporal saliency information maps and texture information maps with a wavelet transform to obtain different high-frequency subbands;
making a histogram of each high-frequency subband;
fitting the histograms of all the high-frequency subbands with a generalized Gaussian distribution GGD and computing the fitting errors.
Wherein the half-reference data comprise: the spatial saliency information impairment, temporal saliency information impairment, spatial texture information impairment, and temporal texture information impairment.
Wherein computing the degree of impairment of the damaged video from the half-reference data of the original video and of the damaged video comprises: computing the degree of impairment of the damaged video via relative entropy from the half-reference data of the original video and of the damaged video.
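Before the detailed steps of embodiment 2, the overall two-sided flow can be pictured with a minimal Python skeleton. All four callables are hypothetical placeholders for steps 101) to 111) below, not names from the patent.

def half_reference_qoe(original_frames, damaged_frames,
                       extract_features, compress, distortion, mos_model):
    """Skeleton of the method of embodiment 1; the four callables are
    hypothetical stand-ins for the concrete steps 101)-111) of embodiment 2."""
    # Operator side: per-frame saliency/texture maps, compressed into half-reference data.
    original_half_ref = compress(extract_features(original_frames))
    # User side: the same feature extraction on the damaged video...
    damaged_half_ref = compress(extract_features(damaged_frames))
    # ...then relative-entropy impairment values fed to a pre-trained network.
    return mos_model(distortion(original_half_ref, damaged_half_ref))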
Embodiment 2:
This embodiment discloses a half-reference video QoE objective evaluation method based on image feature information. As shown in the overview flow chart of Fig. 1, the method performs half-reference objective QoE quality assessment of real-time video traffic in 11 steps, divided between an operator side (5 steps) and a user side (6 steps). Each step is described in turn below.
Operator side:
101) Extract the saliency information of each frame of the original video. Saliency describes the regions of an image that attract relatively more visual attention. First, saliency components are built from four aspects: intensity, color, orientation, and skin color; the four components are then merged into a single saliency map according to their weights. Note that before the skin-color saliency component is computed, face recognition must be applied to detect whether a portrait is actually present. The resulting saliency map assigns a saliency value to each pixel of the original image; the higher the value, the more strongly the region attracts human attention.
The concrete computation is as follows. First, a Gaussian pyramid with 9 scales is built for each input frame, with center scales c ∈ {2, 3, 4} and surround scales s = c + δ, δ ∈ {3, 4}. The across-scale difference ⊖ between two images of different scales is defined as: interpolate the coarse-scale image to the fine scale, then subtract pixel by pixel. Let r, g, and b be the three primary color components of the original image; the intensity image is I = (r + g + b)/3. In addition, broadly tuned color channels are defined as R = r − (g + b)/2, G = g − (r + b)/2, B = b − (r + g)/2, and Y = (r + g)/2 − |r − g|/2 − b. The above I, R, G, B, and Y are all computed at multiple scales. The intensity feature maps are defined as
I(c, s) = |I(c) ⊖ I(s)|   (1)
and the color feature maps as
RG(c, s) = |(R(c) − G(c)) ⊖ (G(s) − R(s))|   (2)
BY(c, s) = |(B(c) − Y(c)) ⊖ (Y(s) − B(s))|   (3)
Next, Gabor filters in the four directions θ ∈ {0°, 45°, 90°, 135°} are applied to the intensity image I at each scale, giving Gabor pyramids O(σ, θ); the orientation feature maps are defined as
O(c, s, θ) = |O(c, θ) ⊖ O(s, θ)|   (4)
To obtain the saliency components, the feature maps above must be merged. The merge operation ⊕ is defined as: rescale the input images of different scales to scale 4 and add them pixel by pixel. The saliency components for intensity, color, and orientation are then computed by formulas (5) to (7) respectively, where N(·) is the normalization operator:
Ī = ⊕_{c=2..4} ⊕_{s=c+3,c+4} N(I(c, s))   (5)
C̄ = ⊕_{c=2..4} ⊕_{s=c+3,c+4} [N(RG(c, s)) + N(BY(c, s))]   (6)
Ō = Σ_θ N( ⊕_{c=2..4} ⊕_{s=c+3,c+4} N(O(c, s, θ)) )   (7)
In addition, when a face is detected, the saliency component of the skin pixels must also be computed; it assigns each pixel a weight according to how close the pixel value in the original image is to skin color. Finally, the saliency components are merged according to the predefined weights W_i, giving the single-frame saliency map
S = Σ_i W_i · C_i   (8)
where C_i ranges over the intensity, color, orientation, and skin-color components.
By computing the saliency of each frame of the video and weighting the original frame with the resulting saliency map, the weighted saliency information of the image is obtained. This step is given by formula (9), where P denotes the original video, S_P(i) the saliency map extracted from frame i of the original video, the multiplication is pixel-wise between the two images, i denotes a frame, F the set of all frames, and SWS the spatial saliency-weighted image (SWS standing for Saliency, Weighted, and Spatial):
SWS_P = { SWS_P(i) | SWS_P(i) = S_P(i) × Original_Video(i), i ∈ F }   (9)
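As a concrete illustration of the pyramid construction and the across-scale differences of formula (1), the following minimal Python sketch computes the intensity feature maps with OpenCV. It is a sketch under the assumption that OpenCV's pyrDown/resize stand in for the pyramid and interpolation steps; the color, orientation (Gabor), and skin-color channels would follow the same pattern, and all names are illustrative rather than from the patent.

import cv2
import numpy as np

def intensity_feature_maps(frame_bgr):
    """Intensity channel of step 101: 9-scale Gaussian pyramid, then
    I(c, s) = |I(c) (-) I(s)| for c in {2,3,4}, s = c + delta, delta in {3,4}."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    intensity = (r + g + b) / 3.0                 # I = (r + g + b) / 3
    pyramid = [intensity]
    for _ in range(8):                            # scales 0..8
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    features = []
    for c in (2, 3, 4):
        for delta in (3, 4):
            s = c + delta
            # interpolate the coarse scale up to the fine scale, then subtract
            coarse = cv2.resize(pyramid[s], pyramid[c].shape[::-1])
            features.append(np.abs(pyramid[c] - coarse))
    return features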
102) Extract the texture information of each frame of the original video. Texture information assesses how sensitive each region is to damage according to the edge structure of the image, based on the following assumption: damage occurring in texture-rich regions of an image is harder to notice than damage in smooth, evenly toned regions. In this scheme, a Laplacian-of-Gaussian filter is first used to extract the edge information from the original image; the edge image then undergoes morphological dilation and each pixel value is inverted, giving the texture map; finally, this texture map is likewise used to weight the original frame, giving the texture information of the image. Texture-rich regions are thereby covered by the dilated edges, so that image damage in smoothly toned parts is assessed with extra weight.
The concrete computation is as follows. The Laplacian-of-Gaussian edge extractor first applies Gaussian filtering to the image and then the Laplacian operator to obtain the edge image. The Gaussian filtering operation is given by formula (10):
L(x, y; t) = g(x, y; t) * f(x, y)   (10)
where L is the filtering result, f(x, y) is the pixel value of a video frame at (x, y), and g(x, y; t) is the Gaussian function described by formula (11), with t the filter scale:
g(x, y; t) = (1 / (2πt)) e^(−(x² + y²) / (2t))   (11)
The Laplacian operator is given by formula (12):
∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)   (12)
Morphological dilation is given by formula (13), where A is the input image and B is a rectangular structuring element of width and height 8:
dilation(A, B) = { a + b | a ∈ A, b ∈ B }   (13)
In summary, the single-frame texture image T_P(i) is extracted by formula (14): the edge image is dilated with B and each pixel value is then inverted,
T_P(i) = invert( dilation(∇² L_P(i), B) )   (14)
By computing the texture of each frame of the video and weighting the original frame with the resulting texture image, the weighted texture information of the image is obtained, in which the regions critical for impairment assessment are highlighted while the remaining regions appear in subdued gray tones. This step is given by formula (15), where T_P(i) denotes the texture image extracted from frame i of the original video and TWS the spatial texture-weighted image (the three letters of TWS standing for Texture, Weighted, and Spatial):
TWS_P = { TWS_P(i) | TWS_P(i) = T_P(i) × Original_Video(i), i ∈ F }   (15)
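A minimal OpenCV sketch of formulas (10) to (15) for one grayscale frame follows; the filter scale and the 0-255 inversion are assumptions the patent leaves open, and the function name is illustrative.

import cv2
import numpy as np

def texture_weighted_frame(gray_frame):
    """Step 102 sketch: Gaussian smoothing (10)-(11), Laplacian edges (12),
    8x8 morphological dilation (13), pixel inversion (14), weighting (15)."""
    f = gray_frame.astype(np.float32)
    smoothed = cv2.GaussianBlur(f, (0, 0), sigmaX=1.5)        # filter scale chosen here, an assumption
    edges = cv2.convertScaleAbs(cv2.Laplacian(smoothed, cv2.CV_32F))
    dilated = cv2.dilate(edges, np.ones((8, 8), np.uint8))    # 8x8 rectangular structuring element
    texture_map = 255 - dilated                               # invert: textured regions masked out
    return (texture_map.astype(np.float32) / 255.0) * f      # weighted frame, cf. formula (15)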
103) This step takes the per-frame saliency and texture information computed in 101) and 102) as input. When assessing the QoE of a video service, the degree of impairment in both the spatial domain and the temporal domain matters greatly to subjective perception, so this scheme computes corresponding spatial and temporal impairment values for both the saliency information and the texture information. Specifically, the spatial assessment directly uses the per-frame saliency and texture results obtained in 101) and 102), while the temporal assessment uses the saliency and texture differences between consecutive frames. For each video, the degree of impairment therefore needs to be assessed in four aspects altogether. The computation of this step is given by formulas (16) and (17), where SWT denotes the temporal saliency-weighted image (Saliency, Weighted, and Temporal) and TWT the temporal texture-weighted image (Texture, Weighted, and Temporal):
SWT_P = { SWT_P(i) | SWT_P(i) = abs(SWS_P(i) − SWS_P(i−1)), i ∈ F }   (16)
TWT_P = { TWT_P(i) | TWT_P(i) = abs(TWS_P(i) − TWS_P(i−1)), i ∈ F }   (17)
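Formulas (16) and (17) are plain absolute frame differences; a short sketch (names illustrative):

import numpy as np

def temporal_maps(spatial_maps):
    """Step 103: the temporal map of frame i is the absolute pixel-wise
    difference between consecutive spatial weighted maps (saliency or texture)."""
    maps = [m.astype(np.float32) for m in spatial_maps]
    return [np.abs(maps[i] - maps[i - 1]) for i in range(1, len(maps))]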
104) The spatial and temporal saliency and texture data obtained in 103) are far too voluminous to transmit directly over the network and must therefore be compressed. This step performs the wavelet transform: the information of the four aspects obtained for each frame of the original video is decomposed with a steerable pyramid with scale parameter 3 and 2 directions, giving 6 different high-frequency subbands in total for assessing image damage. Subsequently, a histogram of each high-frequency subband is made, yielding the description P of the original video. The computation of this step is given by formula (18), where Wavelet denotes the wavelet transform just described and Hist makes the histogram of each high-frequency subband:
P = Hist(Wavelet(SWS_P, TWS_P, SWT_P, TWT_P))   (18)
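A sketch of the decomposition-plus-histogram step follows. The patent names a steerable pyramid (3 scales, 2 orientations, 6 high-frequency subbands); PyWavelets' separable 2-D transform is used below as a stand-in, which is an assumption and yields 3 detail bands per level instead of 2.

import numpy as np
import pywt

def subband_histograms(weighted_map, levels=3, bins=64):
    """Step 104 sketch: decompose a weighted map into high-frequency subbands
    and histogram each one (cf. formula (18))."""
    coeffs = pywt.wavedec2(weighted_map, wavelet='db2', level=levels)
    histograms = []
    for detail_level in coeffs[1:]:        # skip the low-frequency approximation
        for band in detail_level:          # horizontal, vertical, diagonal details
            counts, _ = np.histogram(band.ravel(), bins=bins)
            histograms.append(counts / max(counts.sum(), 1))   # normalized histogram
    return histograms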
105) This step fits each high-frequency subband histogram with a generalized Gaussian distribution (GGD). The GGD function's curve shape is determined by only two parameters, α and β, and it fits the high-frequency subband histograms well; its definition is given by formulas (19) and (20). At the same time, the error ε produced in fitting each high-frequency subband histogram must also be computed, using the relative entropy (also called the KL divergence). Specifically, the approximate histogram description P_m obtained from the GGD fit is first computed by formula (21), and the relative entropy ε of P_m with respect to P is then computed by formula (22). In the end, the transmitted parameters for each high-frequency subband comprise α, β, and ε, where α takes an 11-bit floating-point value (8 mantissa bits, 3 exponent bits), β takes 8 bits, and ε takes 8 bits: 27 bits in total. For each video frame, each of the 4 impairment aspects corresponds to 6 high-frequency subbands, so 27 × 4 × 6 = 648 bits need to be transmitted per frame; if the video runs at 30 frames per second, the total bandwidth occupied is 648 × 30 / 8 = 2430 bytes/s, about 2.43 KB/s. This is entirely acceptable for half-reference video QoE objective quality assessment. The half-reference data of the original video must be transmitted over a lossless auxiliary channel.
p(x) = β / (2αΓ(1/β)) · e^(−(|x|/α)^β)   (19)
Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt   (20)
P_m = GGD_Fitting(P)   (21)
ε = D_KL(P_m ‖ P) = Σ_i ln(P_m(i) / P(i)) · P_m(i)   (22)
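A sketch of the GGD fit of formulas (19), (21), and (22) follows. The patent does not specify the fitting procedure, so the Nelder-Mead least-squares search below is an assumption; the input histogram is assumed normalized to sum to 1.

import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize

def ggd_pdf(x, alpha, beta):
    """Formula (19): p(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta)."""
    return beta / (2.0 * alpha * gamma(1.0 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def fit_ggd(hist, bin_centers):
    """Fit (alpha, beta) to one subband histogram and return the fitting error
    epsilon = D_KL(P_m || P) of formulas (21)-(22)."""
    def loss(params):
        a, b = params
        if a <= 0 or b <= 0:
            return np.inf
        pm = ggd_pdf(bin_centers, a, b)
        pm /= pm.sum() + 1e-12                     # discretize the fitted pdf
        return float(np.sum((pm - hist) ** 2))     # least-squares fit (assumption)
    alpha, beta = minimize(loss, x0=[1.0, 2.0], method='Nelder-Mead').x
    pm = ggd_pdf(bin_centers, alpha, beta)
    pm /= pm.sum() + 1e-12
    ok = (pm > 0) & (hist > 0)                     # guard the logarithm
    epsilon = float(np.sum(pm[ok] * np.log(pm[ok] / hist[ok])))   # formula (22)
    return alpha, beta, epsilon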
User side:
106) Apart from taking as input the damaged video and its saliency maps, this step is identical to 101). The computation of this step is given by formula (23), where Q denotes the damaged video and S_Q(i) the saliency map extracted from frame i of the damaged video:
SWS_Q = { SWS_Q(i) | SWS_Q(i) = S_Q(i) × Damaged_Video(i), i ∈ F }   (23)
107) Apart from taking as input the damaged video and its texture images, this step is identical to 102). The computation of this step is given by formula (24), where T_Q(i) denotes the texture image extracted from frame i of the damaged video:
TWS_Q = { TWS_Q(i) | TWS_Q(i) = T_Q(i) × Damaged_Video(i), i ∈ F }   (24)
108) Apart from taking as input the data of 106) and 107), this step is identical to 103). The computation of this step is given by formulas (25) and (26):
SWT_Q = { SWT_Q(i) | SWT_Q(i) = abs(SWS_Q(i) − SWS_Q(i−1)), i ∈ F }   (25)
TWT_Q = { TWT_Q(i) | TWT_Q(i) = abs(TWS_Q(i) − TWS_Q(i−1)), i ∈ F }   (26)
109) Apart from taking as input the data of 108), this step is identical to 104). The computation of this step is given by formula (27):
Q = Hist(Wavelet(SWS_Q, TWS_Q, SWT_Q, TWT_Q))   (27)
110) This step takes the data of 109) and 105) as input. First, the histogram of each high-frequency subband corresponding to each frame is rebuilt from the α and β values received in the half-reference data. The concrete impairment values are then computed by formula (28), where P denotes each histogram corresponding to the original video, Q each histogram corresponding to the damaged video, P_m each approximate histogram obtained from the GGD fit, and ε = D_KL(P_m ‖ P). Finally, four impairment values, the Distortion terms of formula (28), are computed for each video, corresponding respectively to the spatial saliency-weighted impairment, spatial texture-weighted impairment, temporal saliency-weighted impairment, and temporal texture-weighted impairment.
111) Finally, the subjective perceived quality (MOS) is assessed from the video impairment situation computed in 110), using a pre-trained neural network. The computation of this step is given by formula (29):
MOS = ANN(Distortion)   (29)
where ANN denotes the pre-trained neural network and Distortion the four impairment values of 110).
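Step 111 leaves the network unspecified, so the following sketch uses scikit-learn's MLPRegressor as a stand-in for the pre-trained model; the architecture and the synthetic (distortion, MOS) training pairs below are purely illustrative assumptions, not the patent's model.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in for the pre-trained network of formula (29).
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))                    # 4 impairment values per video (synthetic)
y_train = 100.0 - 60.0 * X_train.mean(axis=1)     # fabricated monotone impairment-to-MOS relation
mos_net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mos_net.fit(X_train, y_train)

# [spatial saliency, spatial texture, temporal saliency, temporal texture] impairments
distortion = np.array([[0.12, 0.08, 0.20, 0.15]])
print(f"estimated MOS: {mos_net.predict(distortion)[0]:.1f}")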
The beneficial effects of the embodiments of the present invention are as follows:
When the human eye watches a video, it attends to only some regions of each image rather than the whole, so video damage occurring in key regions affects subjective perception especially strongly. Addressing this, the scheme draws on a visual attention model and derives the saliency information of the image from four aspects: color, intensity, orientation, and skin color. In deciding whether to consider skin color, face recognition is first applied to judge whether a portrait is present in the image, and the skin-color saliency component is computed only when a portrait exists; this keeps the saliency computation accurate and in turn improves the accuracy of the impairment assessment.
The edge structure inherent in an image can be used to judge the image's sensitivity to damage. Specifically, where edges are dense, that is, where texture is complex, any damage is hard for the human eye to notice; conversely, damage appearing in smoothly toned regions is easily caught by the eye. Accordingly, the scheme extracts the image's edge information and then masks the pixels near the edges by morphological dilation, so that damage occurring in smooth pixel regions is weighted more heavily in the assessment, improving its accuracy.
To make the assessment method half-reference, data compression is achieved with a wavelet transform combined with probability-distribution fitting, and the concrete impairment values are computed via relative entropy. The method occupies only a small amount of additional transmission bandwidth while preserving assessment accuracy.
Embodiment 3:
The accuracy of the invention's assessment was tested on the public subjective video quality database LIVE Video Quality Database (LVQD); the test results are shown in Fig. 2. LVQD contains 10 groups of videos; each group comprises 1 original (undamaged) video and 15 damaged videos built from H.264 compression damage, MPEG-2 compression damage, IP transmission damage, and wireless transmission damage, so the whole database contains 150 damaged videos in total. Each damaged video was scored subjectively according to the ITU-R BT.500-11 standard, using a single-stimulus procedure in which raters give a score on a continuous 0-100 scale. 38 raters were invited in total; 9 sets of scores were judged invalid according to the standard and discarded. After the remaining 29 sets of scores were processed according to the standard, the MOS and score variance for each of the 150 damaged videos were obtained.
This method was used to perform objective QoE quality assessment of the damaged videos of LVQD; the results are shown in Fig. 2 and contrasted with other mainstream full-reference objective QoE quality assessment methods in Tables 1 and 2, where Table 1 lists the Pearson correlation coefficients and Table 2 the Spearman correlation coefficients. Although this patent proposes a half-reference method, which is at a disadvantage when compared against full-reference methods that can use all of the original video data, the actual comparison shows that the method remains strongly competitive in assessment accuracy.
Table 1. LVQD assessment results: Pearson correlation coefficients (table image not reproduced)
Table 2. LVQD assessment results: Spearman correlation coefficients (table image not reproduced)
Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and all such modifications and variations fall within the scope defined by the appended claims.

Claims (9)

1. A half-reference video QoE objective evaluation method based on image feature information, characterized in that the method comprises:
an operator side extracting a saliency information map and a texture information map of each frame of an original video, and compressing the saliency and texture information maps to obtain half-reference data of the original video;
a user side receiving the half-reference data of the original video and a damaged video transmitted by the operator side, extracting a saliency information map and a texture information map of each frame of the damaged video to obtain half-reference data of the damaged video, computing a degree of impairment of the damaged video from the half-reference data of the original video and of the damaged video, and assessing the subjective perceived quality MOS with a pre-trained neural network, wherein the damaged video is the operator side's original video after transmission over a lossy channel.
2. The method according to claim 1, characterized in that the saliency information map comprises a temporal saliency information map and a spatial saliency information map.
3. The method according to claim 1, characterized in that the saliency information map comprises weighted intensity, color, orientation, and skin-color components.
4. The method according to claim 1, characterized in that the texture information map comprises a temporal texture information map and a spatial texture information map.
5. The method according to claim 1, characterized in that the extraction of the texture information map of each frame comprises: edge extraction, morphological dilation, and superposition;
wherein the edge extraction comprises: extracting the edge information image of the current frame;
the morphological dilation comprises: applying morphological dilation to the edge information image of the current frame to obtain the processed edge information image;
the superposition comprises: superposing the processed edge information image onto the current frame to obtain the texture information map of the current frame.
6. The method according to claim 5, characterized in that the texture information map of each frame comprises: the texture information map of each frame of the original video at the operator side, and the texture information map of each frame of the damaged video at the user side.
7. The method according to claim 1 or 2, characterized in that the compression processing comprises:
decomposing the spatial and temporal saliency information maps and texture information maps with a wavelet transform to obtain different high-frequency subbands;
making a histogram of each high-frequency subband;
fitting the histograms of all the high-frequency subbands with a generalized Gaussian distribution GGD and computing the fitting errors.
8. The method according to claim 1, characterized in that the half-reference data comprise: the spatial saliency information impairment, temporal saliency information impairment, spatial texture information impairment, and temporal texture information impairment.
9. The method according to claim 1, characterized in that computing the degree of impairment of the damaged video from the half-reference data of the original video and of the damaged video comprises: computing the degree of impairment of the damaged video via relative entropy from the half-reference data of the original video and of the damaged video.
CN201410079834.6A 2014-03-05 2014-03-05 Half-reference video QoE objective evaluation method based on image feature information Expired - Fee Related CN103841410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410079834.6A CN103841410B (en) 2014-03-05 2014-03-05 Half-reference video QoE objective evaluation method based on image feature information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410079834.6A CN103841410B (en) 2014-03-05 2014-03-05 Half-reference video QoE objective evaluation method based on image feature information

Publications (2)

Publication Number Publication Date
CN103841410A true CN103841410A (en) 2014-06-04
CN103841410B CN103841410B (en) 2016-05-04

Family

ID=50804492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410079834.6A Expired - Fee Related CN103841410B (en) 2014-03-05 2014-03-05 Based on half reference video QoE objective evaluation method of image feature information

Country Status (1)

Country Link
CN (1) CN103841410B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104113788A (en) * 2014-07-09 2014-10-22 北京邮电大学 QoE training and assessment method and system of TCP video stream service
CN106651829A (en) * 2016-09-23 2017-05-10 中国传媒大学 Non-reference image objective quality evaluation method based on energy and texture analysis
CN107657251A (en) * 2016-07-26 2018-02-02 阿里巴巴集团控股有限公司 Determine the device and method of identity document display surface, image-recognizing method
CN109801266A (en) * 2018-12-27 2019-05-24 西南技术物理研究所 A kind of image quality measure system of wireless image data-link
CN110324613A (en) * 2019-07-30 2019-10-11 华南理工大学 A kind of deep learning image evaluation method towards video transmission quality
CN110599468A (en) * 2019-08-30 2019-12-20 中国信息通信研究院 No-reference video quality evaluation method and device
CN111242936A (en) * 2020-01-17 2020-06-05 苏州瓴图智能科技有限公司 Non-contact palm herpes detection device and method based on image
CN113011270A (en) * 2021-02-23 2021-06-22 中国矿业大学 Coal mining machine cutting state identification method based on vibration signals

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006099743A1 (en) * 2005-03-25 2006-09-28 Algolith Inc. Apparatus and method for objective assessment of dct-coded video quality with or without an original video sequence
US20070103551A1 (en) * 2005-11-09 2007-05-10 Samsung Electronics Co., Ltd. Method and system for measuring video quality
KR20080029371A (en) * 2006-09-29 2008-04-03 광운대학교 산학협력단 Method of image quality evaluation, and system thereof
CN101282481A (en) * 2008-05-09 2008-10-08 中国传媒大学 Method for evaluating video quality based on artificial neural net
CN101426150A (en) * 2008-12-08 2009-05-06 青岛海信电子产业控股股份有限公司 Video image quality evaluation method and system
CN101448175A (en) * 2008-12-25 2009-06-03 华东师范大学 Method for evaluating quality of streaming video without reference
KR20100109345A (en) * 2009-03-30 2010-10-08 한국전자통신연구원 Apparatus and method for extracting and decision-making of spatio-temporal feature in broadcasting and communication systems
JP2011186715A (en) * 2010-03-08 2011-09-22 Nk Works Kk Method and photographic image device evaluation
CN102496162A (en) * 2011-12-21 2012-06-13 浙江大学 Method for evaluating quality of part of reference image based on non-tensor product wavelet filter
CN103281555A (en) * 2013-04-24 2013-09-04 北京邮电大学 Half reference assessment-based quality of experience (QoE) objective assessment method for video streaming service

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006099743A1 (en) * 2005-03-25 2006-09-28 Algolith Inc. Apparatus and method for objective assessment of dct-coded video quality with or without an original video sequence
US20070103551A1 (en) * 2005-11-09 2007-05-10 Samsung Electronics Co., Ltd. Method and system for measuring video quality
KR20080029371A (en) * 2006-09-29 2008-04-03 광운대학교 산학협력단 Method of image quality evaluation, and system thereof
CN101282481A (en) * 2008-05-09 2008-10-08 中国传媒大学 Method for evaluating video quality based on artificial neural net
CN101426150A (en) * 2008-12-08 2009-05-06 青岛海信电子产业控股股份有限公司 Video image quality evaluation method and system
CN101448175A (en) * 2008-12-25 2009-06-03 华东师范大学 Method for evaluating quality of streaming video without reference
KR20100109345A (en) * 2009-03-30 2010-10-08 한국전자통신연구원 Apparatus and method for extracting and decision-making of spatio-temporal feature in broadcasting and communication systems
JP2011186715A (en) * 2010-03-08 2011-09-22 Nk Works Kk Method and photographic image device evaluation
CN102496162A (en) * 2011-12-21 2012-06-13 浙江大学 Method for evaluating quality of part of reference image based on non-tensor product wavelet filter
CN103281555A (en) * 2013-04-24 2013-09-04 北京邮电大学 Half reference assessment-based quality of experience (QoE) objective assessment method for video streaming service

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHOU WANG, EERO P. SIMONCELLI: "Reduced-Reference Image Quality Assessment Using a Wavelet-Domain Natural Image Statistic Model", Human Vision and Electronic Imaging X, Proc. SPIE, vol. 5666, 20 January 2005, XP055186956, DOI: 10.1117/12.597306 *
FENG XIN: "Research on objective quality assessment methods for packet-loss-impaired images and video based on visual saliency" (基于视觉显著性的网络丢包图像和视频的客观质量评估方法研究), China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 December 2011 *
YANG YAN: "Research on user-experience (QoE) methods for wireless video quality based on multiple feature types" (基于多特征类型的无线视频质量用户体验(QoE)方法研究), China Doctoral Dissertations Full-text Database, Information Science and Technology, 15 January 2013 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104113788A (en) * 2014-07-09 2014-10-22 北京邮电大学 QoE training and assessment method and system of TCP video stream service
CN104113788B (en) * 2014-07-09 2017-09-19 北京邮电大学 A kind of QoE training of TCP video stream traffics and the method and system assessed
CN107657251A (en) * 2016-07-26 2018-02-02 阿里巴巴集团控股有限公司 Determine the device and method of identity document display surface, image-recognizing method
CN106651829A (en) * 2016-09-23 2017-05-10 中国传媒大学 Non-reference image objective quality evaluation method based on energy and texture analysis
CN106651829B (en) * 2016-09-23 2019-10-08 中国传媒大学 A kind of non-reference picture method for evaluating objective quality based on energy and texture analysis
CN109801266A (en) * 2018-12-27 2019-05-24 西南技术物理研究所 A kind of image quality measure system of wireless image data-link
CN110324613A (en) * 2019-07-30 2019-10-11 华南理工大学 A kind of deep learning image evaluation method towards video transmission quality
CN110599468A (en) * 2019-08-30 2019-12-20 中国信息通信研究院 No-reference video quality evaluation method and device
CN111242936A (en) * 2020-01-17 2020-06-05 苏州瓴图智能科技有限公司 Non-contact palm herpes detection device and method based on image
CN113011270A (en) * 2021-02-23 2021-06-22 中国矿业大学 Coal mining machine cutting state identification method based on vibration signals

Also Published As

Publication number Publication date
CN103841410B (en) 2016-05-04

Similar Documents

Publication Publication Date Title
CN103841410B (en) Half-reference video QoE objective evaluation method based on image feature information
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
Gu et al. Learning a no-reference quality assessment model of enhanced images with big data
Liu et al. No-reference image quality assessment in curvelet domain
CN109978854B (en) Screen content image quality evaluation method based on edge and structural features
CN109255358B (en) 3D image quality evaluation method based on visual saliency and depth map
CN108830823B (en) Full-reference image quality evaluation method based on spatial domain combined frequency domain analysis
CN105631455A (en) Image main body extraction method and system
Balanov et al. Image quality assessment based on DCT subband similarity
CN103426173B (en) Objective evaluation method for stereo image quality
CN111612741B (en) Accurate reference-free image quality evaluation method based on distortion recognition
CN104036493B (en) No-reference image quality evaluation method based on multifractal spectrum
CN114937232B (en) Wearing detection method, system and equipment for medical waste treatment personnel protective appliance
CN103366378A (en) Reference-free type image quality evaluation method based on shape consistency of condition histogram
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN103778636A (en) Feature construction method for non-reference image quality evaluation
CN109919166B (en) Method and device for acquiring classification information of attributes
CN104268590A (en) Blind image quality evaluation method based on complementarity combination characteristics and multiphase regression
CN110570435A (en) method and device for carrying out damage segmentation on vehicle damage image
CN115100077B (en) Image enhancement method and device
CN106934770A (en) A kind of method and apparatus for evaluating haze image defog effect
CN110163837A (en) The recognition methods of video noise, device, equipment and computer readable storage medium
CN105721863B (en) Method for evaluating video quality
US20140267915A1 (en) System and method for blind image deconvolution
CN110135274B (en) Face recognition-based people flow statistics method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160504

CF01 Termination of patent right due to non-payment of annual fee