CN106341677B - Virtual view method for evaluating video quality - Google Patents
- Publication number
- CN106341677B (application CN201510395100.3A)
- Authority
- CN
- China
- Prior art keywords
- time domain
- distortion
- video
- space
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Image Analysis (AREA)
Abstract
The virtual viewpoint video quality evaluation method counts the temporal flicker distortion of a virtual viewpoint video over all the pixels of a spatio-temporal unit, avoiding the erroneous estimate of perceived distortion that a pixel-by-pixel temporal flicker computation would produce. When computing the temporal flicker distortion of the virtual viewpoint video, the method considers both the distortion introduced by depth map errors and the distortion introduced by the left and right viewpoint texture images. It can therefore effectively assess the temporal flicker distortion that strongly affects subjective quality, so that the evaluation result agrees better with human subjective perception and the quality evaluation of virtual viewpoint videos becomes more accurate and comprehensive.
Description
Technical field
The present invention relates to video quality evaluation technology, and more particularly to an accurate and comprehensive virtual viewpoint video quality evaluation method.
Background technology
With the development of 3D video technology, more and more films and TV programmes are shot in 3D, and 3D display devices and 3D TVs are gradually becoming widespread; 3D video can be expected to become mainstream. The international standards bodies MPEG (Moving Picture Experts Group) and ITU-T VCEG (International Telecommunication Union - Telecommunication Standardization Sector Video Coding Experts Group) have jointly developed 3D video coding standards based on depth information. In these standards, the encoder only needs to code and transmit two to three colour texture videos and their corresponding depth videos; from the received texture videos and depth videos, the decoder can synthesize a virtual video for any intermediate viewpoint between the two coded viewpoints, ultimately producing eight, nine or even more views to meet the playback requirements of glasses-free multi-view 3D displays. The quality of the virtual viewpoint video generated from the depth maps therefore has a decisive influence on the final 3D video quality, and how this quality is evaluated determines how well the entire 3D video processing and coding system can be optimized.
Video quality evaluation methods fall into two broad classes: subjective quality evaluation and objective quality evaluation. Subjective evaluation organizes viewers to watch videos and score their quality. Although it yields accurate quality scores, it is time-consuming and labour-intensive and cannot be applied in real-time video processing systems. Objective evaluation, by contrast, estimates video quality automatically by algorithm; it saves manpower and can run in real time.
For the objective quality evaluation of conventional 2D video there are already many research results, at home and abroad, that can estimate noise distortion, transmission distortion, blur distortion and so on reasonably well. Research on the quality evaluation of 3D virtual viewpoint video is still comparatively scarce. Most existing work on objective virtual viewpoint video quality is based on image distortion: it computes the distortion of each frame separately and takes the average as the distortion of the whole video sequence.
One scheme counts the temporal noise distortion of static background regions in the virtual viewpoint video. When a pixel belonging to a static background region changes in luminance, and the change exceeds the just-noticeable-distortion threshold (JND, Just Noticeable Distortion) of the human eye, the luminance change is recorded as temporal noise, and the quality of the virtual viewpoint video is evaluated on that basis.
Another scheme counts the distortion of object contour edges in the virtual viewpoint video. Concretely, edge detection is applied to the image content of the original reference video and the virtual viewpoint video to extract edge contour maps; the two maps are then compared, and the proportion of distorted edge pixels serves as the measure of distortion. Alternatively, each image block is classified before the edge contour distortion is computed: according to the percentage of edge pixels within a block, blocks are classified as flat, edge or texture blocks, and each type receives a different weight in the distortion computation.
A further scheme first transforms the original reference video and the virtual viewpoint video into the wavelet domain and horizontally aligns the two videos there. The motivation is that a global horizontal shift of objects in the virtual viewpoint video does not affect the perceived quality, yet it lowers the objective quality score; computing the objective distortion after horizontal alignment therefore agrees better with subjective perception. Alternatively, objects that have undergone a horizontal displacement are shift-compensated before the distortion is computed: a horizontal object displacement caused by depth map distortion does not noticeably degrade the perceived quality, because the object is displaced as a whole and suffers no structural distortion. Compensating the shift first prevents it from being counted as distortion and so reflects the perceived quality of the virtual viewpoint video more faithfully.
The above virtual viewpoint video quality evaluation schemes share the following shortcomings. They do not account for temporal flicker distortion in the virtual viewpoint video. The depth map supplies the depth information needed to generate the virtual viewpoint video, but during compression, transmission and processing it inevitably acquires distortion, which causes geometric distortion in the virtual viewpoint images. When the images are played continuously as video, this irregular geometric distortion appears as temporal flicker distortion.
Moreover, pixel-by-pixel distortion computation is ill-suited to evaluating the quality of virtual viewpoint video. The most common image and video distortion criteria, PSNR (Peak Signal to Noise Ratio) and MSE (Mean Squared Error), compute distortion pixel by pixel and then average over all pixels. This is unsuitable for virtual viewpoint video: a geometric distortion that stays constant over time is hard for the human eye to detect and causes no perceived distortion, yet it severely lowers the PSNR and MSE scores. In addition, the virtual viewpoint video is generated from the videos of the two left and right reference viewpoints, which themselves contain random camera sensor noise; this noise does not attract the attention of the human visual system, but it also lowers the PSNR and MSE scores.
The virtual viewpoint video is rendered from the texture images of the left and right viewpoints according to the depth information supplied by the depth maps, so its distortion stems from both depth map distortion and texture image distortion. Existing virtual viewpoint video quality evaluation methods are often incomplete in this respect: some consider only the influence of depth map distortion, others only the distortion introduced by the rendering process itself.
Summary of the invention
Based on this, it is necessary to provide an accurate and comprehensive virtual viewpoint video quality evaluation method.
A virtual viewpoint video quality evaluation method comprises the following steps:
dividing the original reference video and the video to be evaluated into spatio-temporal units;
computing the temporal flicker distortion and the spatio-temporal texture distortion of each spatio-temporal unit;
computing a first-class distortion from the temporal flicker features of the original reference video and the video to be evaluated, and a second-class distortion from the spatio-temporal texture features of the original reference video and the video to be evaluated; and
integrating the first-class distortion and the second-class distortion into a total distortion from which the quality of the video to be evaluated is judged.
In one embodiment, the step of dividing the original reference video and the video to be evaluated into spatio-temporal units comprises:
dividing each video into image groups, each consisting of several temporally consecutive frames;
dividing each image of an image group into image blocks, temporally consecutive image blocks forming a spatio-temporal unit.
In one embodiment, a spatio-temporal unit consists either of temporally consecutive image blocks at the same spatial position, or of temporally consecutive image blocks at different spatial positions that describe the motion trajectory of one object.
In one embodiment, the step of computing the temporal flicker distortion of each spatio-temporal unit comprises:
computing a first temporal gradient for each pixel of the original reference video;
computing a second temporal gradient for each pixel of the video to be evaluated;
judging from the first and second temporal gradients whether each pixel exhibits temporal flicker distortion and, if so, computing the temporal flicker distortion strength.
In one embodiment, the step of computing the temporal flicker distortion of each spatio-temporal unit further comprises: computing the temporal flicker distortion from the pixel temporal gradient information, where the temporal flicker distortion is proportional to the direction-change frequency and the change amplitude of the pixel temporal gradient.
In one embodiment, the step of computing the temporal flicker distortion of each spatio-temporal unit further comprises:
computing the temporal flicker distortion of each group of temporally adjacent pixels in a spatio-temporal unit of the video to be evaluated;
detecting with a first function whether the group of temporally adjacent pixels exhibits temporal flicker distortion;
and, if so, measuring the temporal flicker distortion strength of the group with a second function.
In one embodiment, the step of computing the spatio-temporal texture distortion of each spatio-temporal unit comprises:
computing the horizontal gradient and the vertical gradient of each pixel in the spatio-temporal unit;
computing the spatial gradient of each pixel from the horizontal and vertical gradients;
computing the spatio-temporal texture distortion of the video to be evaluated from the spatial gradient difference between the spatio-temporal units of the original reference video and of the video to be evaluated.
In one embodiment, the method further comprises computing pixel feature statistics within each spatio-temporal unit, where the statistics include the mean, variance and standard deviation of the pixel gradient information, and the pixel gradient information includes the horizontal spatial gradient, vertical spatial gradient, spatial gradient, temporal gradient and spatio-temporal gradient of a pixel.
In one embodiment, the method further comprises setting a minimum perception threshold for the pixel feature statistics: when a statistic falls below the minimum perception threshold, the threshold replaces the statistic; when it exceeds the threshold, the statistic is kept unchanged.
In one embodiment, the steps of computing the first-class distortion from the temporal flicker features and the second-class distortion from the spatio-temporal texture features comprise: taking as the first-class distortion the distortion of the series of spatio-temporal units with the largest temporal flicker distortion, and taking as the second-class distortion the distortion of the series of spatio-temporal units with the largest spatio-temporal texture distortion.
The virtual viewpoint video quality evaluation method above counts the temporal flicker distortion of the virtual viewpoint video over all the pixels of a spatio-temporal unit, avoiding the erroneous estimate of perceived distortion that a pixel-by-pixel temporal flicker computation would produce. When computing the temporal flicker distortion of the virtual viewpoint video, it considers both the distortion introduced by depth map errors and the distortion introduced by the left and right viewpoint texture images. The method can therefore effectively assess the temporal flicker distortion that strongly affects subjective quality, so that the evaluation agrees better with human subjective perception and the quality evaluation of virtual viewpoint videos becomes more accurate and comprehensive.
Brief description of the drawings
Fig. 1 is the flow chart of the virtual viewpoint video quality evaluation method;
Fig. 2 is a schematic diagram of the first-class spatio-temporal unit division;
Fig. 3 is a schematic diagram of the second-class spatio-temporal unit division;
Fig. 4 is the calculation template for the horizontal spatial gradient of a pixel;
Fig. 5 is the calculation template for the vertical spatial gradient of a pixel;
Fig. 6 is the flow chart of the temporal flicker distortion computation;
Fig. 7 is the flow chart of the spatio-temporal texture distortion computation.
Detailed description of the embodiments
To facilitate understanding, the present invention is described more fully below with reference to the accompanying drawings, in which preferred embodiments are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described here; rather, these embodiments are provided so that the understanding of the disclosure will be thorough and complete.
It should be noted that when an element is referred to as being "fixed to" another element, it can be directly on the other element or intervening elements may be present. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or intervening elements may be present. The terms "vertical", "horizontal", "left", "right" and similar expressions are used here for purposes of illustration only.
Unless defined otherwise, all technical and scientific terms used here have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description is for describing particular embodiments only and is not intended to limit the invention. The term "and/or" includes any and all combinations of one or more of the associated listed items.
Suppose texture videos and corresponding depth images exist for a left viewpoint and a right viewpoint, together with a video of an intermediate viewpoint between them. The texture and depth videos of the left and right viewpoints may all contain distortion. Depth Image Based Rendering (DIBR) can generate a virtual video of the intermediate viewpoint from the left and right texture videos and their depth images. Compared with the original intermediate viewpoint video used as reference, the generated virtual viewpoint video contains various distortions.
The virtual viewpoint video quality evaluation method takes the original intermediate viewpoint video as reference and computes the distortion of the virtual viewpoint video, such that the computed distortion value is consistent with the distortion of the virtual viewpoint video perceived by the human visual system.
Fig. 1 shows the flow chart of the virtual viewpoint video quality evaluation method.
Since there are two classes of spatio-temporal unit division, an embodiment is given below for each division mode.
Embodiment 1:
A virtual viewpoint video quality evaluation method comprises the following steps.
Step S110: divide the original reference video and the video to be evaluated into spatio-temporal units.
Step S110 comprises:
1. dividing each of the original reference video and the video to be evaluated into image groups, each consisting of several temporally consecutive frames;
2. dividing each image of an image group into image blocks, temporally consecutive image blocks forming a spatio-temporal unit.
In this embodiment, a spatio-temporal unit consists of temporally consecutive image blocks at the same spatial position.
Specifically, the original reference video and the video to be evaluated must first be divided into spatio-temporal units; the process is illustrated in Fig. 2. Each video sequence (original reference and to-be-evaluated) is divided into image groups, each consisting of several temporally consecutive frames. The choice of image group length depends on the temporal gaze period of the human eye and on the frame rate of the video. Suppose the gaze period is T seconds and the frame rate is N frames per second; the image group length is then N × T frames, i.e. N × T temporally adjacent frames form one image group. The groups may be formed with a sliding window, in which case adjacent groups overlap, or without any overlap between groups.
For one image group of N × T frames, if each image is divided into blocks of width w and height h, each temporally consecutive set of blocks in the group forms a spatio-temporal unit. As shown in Fig. 2, a first-class spatio-temporal unit is built from temporally consecutive image blocks at the same spatial position: each unit is a pixel cuboid of size w × h × (N × T). Suppose the video is 200 frames long, with a frame rate of 25 frames per second and a resolution of 1920×1080. The sequence is divided into 40 image groups of 5 temporally consecutive frames each: with a gaze period of 0.2 seconds and a frame rate of 25 frames per second, the image group length is 0.2 × 25 = 5 frames, i.e. 5 temporally adjacent frames form one image group. For one such group, the pixels form a cuboid of size 1920 × 1080 × 5; dividing each image into blocks of 16 × 16 pixels partitions the whole image group into a series of spatio-temporal units, each a pixel cuboid of size 16 × 16 × 5. The spatio-temporal unit is the basic unit for computing the distortion of the virtual viewpoint video: the distortion computed for each unit is integrated into the distortion of its image group, and the image group distortions are finally integrated into the distortion of the whole video sequence.
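As an illustrative sketch of this division (assuming a grayscale video stored as a numpy array of shape (frames, height, width); the function and parameter names are our own, not from the patent, and blocks or groups that do not fill a whole tile are simply ignored):

```python
import numpy as np

def split_into_units(video, block_w=16, block_h=16, group_len=5):
    """Divide a (frames, H, W) video into first-class spatio-temporal units:
    pixel cuboids of size group_len x block_h x block_w taken at the same
    spatial position across the consecutive frames of one image group."""
    frames, height, width = video.shape
    units = []
    for t0 in range(0, frames - group_len + 1, group_len):   # image groups
        for y0 in range(0, height - block_h + 1, block_h):
            for x0 in range(0, width - block_w + 1, block_w):
                units.append(video[t0:t0 + group_len,
                                   y0:y0 + block_h,
                                   x0:x0 + block_w])
    return units
```

With the 200-frame 1920×1080 example above, this would yield 40 image groups of 16×16×5 cuboids (the 1080 rows leave a partial bottom row of blocks, which this sketch simply drops).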
Step S120: compute the first-class distortion from the temporal flicker features of the original reference video and the video to be evaluated.
In this embodiment, the method comprises the steps of:
computing a first temporal gradient for each pixel of the original reference video;
computing a second temporal gradient for each pixel of the video to be evaluated;
judging from the first and second temporal gradients whether each pixel exhibits temporal flicker distortion and, if so, computing the temporal flicker distortion strength. Specifically, the temporal flicker distortion is computed from the sign difference and the absolute-value difference of the first and second temporal gradients.
Specifically, the temporal flicker distortion is computed for each spatio-temporal unit; the flow chart is shown in Fig. 6. The spatio-temporal units at corresponding positions of the original reference video and the video to be evaluated (the virtual viewpoint video) are taken as input. Within a unit, each pixel I of the original reference video and each pixel Ĩ of the video to be evaluated (the virtual viewpoint video) has a three-dimensional coordinate (x, y, t). The temporal gradients ∇t I(x, y, t) and ∇t Ĩ(x, y, t) are computed for I(x, y, t) and Ĩ(x, y, t) respectively. For the group of temporally adjacent pixels at spatial coordinate (x, y) in the unit, the corresponding temporal flicker distortion DF_{x,y} is computed by formula 1.
DF_{x,y} = (1/T) Σ_{t=1}^{T} Φ(x, y, t) · Δ_t(x, y)    (formula 1)
where DF_{x,y} is the temporal flicker distortion of the group of temporally adjacent pixels at spatial coordinate (x, y), and T is the length of the image group and of the spatio-temporal unit.
In this embodiment, the method further comprises: computing the temporal flicker distortion of each group of temporally adjacent pixels in a spatio-temporal unit of the video to be evaluated; detecting with a first function whether the group exhibits temporal flicker distortion; and, if so, measuring the temporal flicker distortion strength with a second function. Specifically, the first function Φ(·) detects whether the pixel at position (x, y, t) exhibits temporal flicker distortion, and the second function Δ(·) measures the temporal flicker distortion strength of that pixel.
The first function Φ(·) is given by formula 2:
Φ(x, y, t) = 1 if ∇t I(x, y, t) · ∇t Ĩ(x, y, t) < 0 and |∇t Ĩ(x, y, t) − ∇t I(x, y, t)| > ρ, and 0 otherwise    (formula 2)
where the first condition means that the temporal gradient of the point I(x, y, t) in the original reference video and that of the point Ĩ(x, y, t) in the video to be evaluated (the virtual viewpoint video) have opposite directions; a point meeting this condition is considered to exhibit temporal flicker distortion. ρ is a visual perception threshold, which can take the just-noticeable-distortion threshold of the pixel (JND, Just Noticeable Difference). Traditional pixel-domain JND models do not distinguish edge pixels from texture pixels. To keep the threshold sensitive only to edge distortion and reduce its sensitivity to texture distortion, the edge and texture regions of the image can be separated and the JND threshold of the texture regions scaled down. The edge map of the original video image is extracted and divided into blocks of equal size; when the number of edge pixels in a block exceeds a certain range, the block is considered a texture block.
In this embodiment, edge detection is performed with the Canny operator to obtain the edge map, which is divided into 8×8 blocks. When a block contains more than 48 edge pixels, the edge pixels detected in that block are considered texture pixels. When computing the JND threshold, the threshold of a texture pixel is multiplied by 0.1.
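A dependency-free sketch of this block classification (it assumes the binary edge map has already been produced, e.g. by a Canny detector as in the embodiment; the function names and the tiling that ignores padding are our own):

```python
import numpy as np

def classify_texture_blocks(edge_map, block=8, max_edges=48):
    """Mark each block x block tile of a binary edge map as a texture
    block when it contains more than max_edges edge pixels."""
    H, W = edge_map.shape
    is_texture = np.zeros((H // block, W // block), dtype=bool)
    for by in range(H // block):
        for bx in range(W // block):
            tile = edge_map[by*block:(by+1)*block, bx*block:(bx+1)*block]
            is_texture[by, bx] = tile.sum() > max_edges
    return is_texture

def scale_jnd(jnd_map, is_texture, block=8, factor=0.1):
    """Shrink the JND threshold inside texture blocks (by 0.1 here,
    following the embodiment)."""
    out = jnd_map.astype(float).copy()
    for by in range(is_texture.shape[0]):
        for bx in range(is_texture.shape[1]):
            if is_texture[by, bx]:
                out[by*block:(by+1)*block, bx*block:(bx+1)*block] *= factor
    return out
```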
The second function Δ(·) measures the flicker distortion strength of the pixel at position (x, y, t) and is given by formula 3:
Δ_t(x, y) = |∇t Ĩ(x, y, t) − ∇t I(x, y, t)| / |∇t I(x, y, t)|    (formula 3)
where the numerator |∇t Ĩ(x, y, t) − ∇t I(x, y, t)| reflects the size of the temporal gradient distortion, and the division by |∇t I(x, y, t)| accounts for the temporal masking effect. Since the precondition for the second function being non-zero is that ∇t I(x, y, t) and ∇t Ĩ(x, y, t) have opposite directions, Δ_t(x, y) is necessarily greater than 1 and proportional to the strength of the temporal flicker distortion. Let the temporal flicker distortion of a whole spatio-temporal unit be DF_cube, where "cube" refers to the pixel cuboid that defines the unit. DF_cube is computed by formula 4.
DF_cube = (1/(w × h)) Σ_{x=1}^{w} Σ_{y=1}^{h} DF_{x,y}    (formula 4)
where w and h are the width and height of the spatio-temporal unit. After the temporal flicker distortion DF_cube of each spatio-temporal unit in an image group has been obtained, the temporal flicker distortion DF_GoP of the image group ("GoP" for Group of Pictures) is obtained by integrating the DF_cube values, as in formula 5.
DF_GoP = (1/N_W) Σ_{cube ∈ W} DF_cube    (formula 5)
where W is the set containing the W% of spatio-temporal units with the most severe temporal flicker distortion in the image group, and N_W is the number of spatio-temporal units in W. The rationale of the integration rule used in this embodiment is that the most distorted regions of a video image determine the quality judgement of the human visual system for the whole image. In this embodiment, the value of W% is 1%.
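This worst-region pooling rule can be sketched as follows (an illustrative reading of the description: average the W% most distorted units, taking at least one; the function name is our own):

```python
import numpy as np

def pool_worst(unit_distortions, percent=1.0):
    """Average the worst `percent`% of per-unit distortions (at least one
    unit), so that the most distorted regions dominate the score."""
    v = np.sort(np.asarray(unit_distortions, dtype=float))[::-1]  # descending
    n = max(1, int(round(v.size * percent / 100.0)))
    return v[:n].mean()
```

The same pooling can be reused later for the texture distortion, where the worst Z% of units is averaged.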
The step of computing the first-class distortion from the temporal flicker features comprises: taking as the first-class distortion the distortion of the series of spatio-temporal units with the largest temporal flicker distortion. Finally, the temporal flicker distortion of the whole video sequence is obtained by integrating the image group distortions DF_GoP, either according to a worst-case integration criterion or as the average of formula 6.
DF_seq = (1/K) Σ_{k=1}^{K} DF_GoP(k)    (formula 6)
where K is the number of image groups in the video sequence; in this embodiment, K is 40. "seq" stands for sequence.
Step S130: compute the second-class distortion from the spatio-temporal texture features of the original reference video and the video to be evaluated.
In this embodiment, the method further comprises the steps of:
computing the horizontal gradient and the vertical gradient of each pixel in the spatio-temporal unit;
computing the spatial gradient of each pixel in the spatio-temporal unit from the horizontal and vertical gradients;
computing the spatio-temporal texture distortion of the video to be evaluated from the spatial gradient difference between the original reference video and the video to be evaluated.
Specifically, the spatio-temporal texture distortion of the video to be evaluated (the virtual viewpoint video) is again computed with the spatio-temporal unit as the basic unit; the flow is shown in Fig. 7. For each pixel I(x, y, t) in the unit, the spatial gradient ∇s I(x, y, t) is computed. This first requires the gradients in the horizontal and vertical directions, ∇h I(x, y, t) and ∇v I(x, y, t); example calculation templates are shown in Fig. 4 and Fig. 5. The spatial gradient is then obtained from the horizontal and vertical gradients by formula 7.
∇s I(x, y, t) = sqrt( ∇h I(x, y, t)² + ∇v I(x, y, t)² )    (formula 7)
After the spatial gradient of each pixel in the spatio-temporal unit has been computed, its mean μ_cube and standard deviation σ_cube are computed by formulas 8 and 9 respectively.
μ_cube = (1/(w × h × l)) Σ_{x,y,t} ∇s I(x, y, t)    (formula 8)
σ_cube = sqrt( (1/(w × h × l)) Σ_{x,y,t} ( ∇s I(x, y, t) − μ_cube )² )    (formula 9)
Here w, h and l are the width, height and length of the space-time unit, respectively. Because pixels in flat regions of the space-time unit have no significant spatial gradient, the computed spatial-gradient standard deviation may fail to reflect the texture characteristics of the unit accurately. A perceptual threshold ε is therefore set: when σ_cube is less than ε, σ_cube is set to ε. In this embodiment, the value of the perceptual threshold ε is linear in η, the sum of the positive values in the gradient template: ε = α × η. For example, the sum of the positive values in the gradient template of Figure 4 is η = 32; with the linear coefficient α = 5.62 used in this embodiment, ε = 179.84.
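The gradient-magnitude and thresholding steps above can be sketched as follows. This is a minimal illustration in Python/NumPy; the 3×3 templates here are ordinary Sobel kernels standing in for the patent's templates of Figures 4 and 5, so the kernel values (and hence η) are assumptions.

```python
import numpy as np

# Stand-in horizontal/vertical gradient templates (Sobel kernels);
# the patent's actual templates are those of Figures 4 and 5.
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
GY = GX.T

def spatial_gradient(frame):
    """Per-pixel spatial gradient magnitude (Formula 7): sqrt(Gx^2 + Gy^2)."""
    h, w = frame.shape
    pad = np.pad(frame.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            win = pad[dy:dy + h, dx:dx + w]
            gx += GX[dy, dx] * win
            gy += GY[dy, dx] * win
    return np.sqrt(gx ** 2 + gy ** 2)

def sigma_cube(frames, alpha=5.62):
    """Std of spatial gradients over a space-time unit, clamped from below
    by the perceptual threshold eps = alpha * eta (eta = sum of the positive
    template values; 4 for the Sobel kernel above, 32 for Figure 4)."""
    eta = GX[GX > 0].sum()
    eps = alpha * eta
    grads = np.stack([spatial_gradient(f) for f in frames])
    return max(grads.std(), eps)
```

For a perfectly flat unit the raw standard deviation is zero, so `sigma_cube` returns ε, exactly as the thresholding rule prescribes.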
After the spatial-gradient standard deviations σ_ref and σ_cube of the corresponding space-time units of the original reference video and of the video to be evaluated (the virtual viewpoint video) have been calculated, the difference between the two can be measured in several ways to obtain the space-time texture distortion DA_cube of the video to be evaluated, for example by Formula 10.
Formula 10;
After the texture distortion DA_cube of every space-time unit in an image group has been calculated, the values can be pooled into the texture distortion DA_GoP of the image group. The pooling criterion can be the worst-region rule of Formula 11, in which the worst regions determine the overall perceptual quality:
DA_GoP = (1 / N_Z) · Σ_{cube ∈ Z} DA_cube   Formula 11;
where Z is the set formed by the worst Z% of the DA_cube values in the image group, and N_Z is the number of elements in Z.
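The worst-region pooling of Formula 11 can be sketched as below. The use of a plain mean over the worst set, and rounding the set size to at least one element, are illustrative assumptions consistent with the description.

```python
def pool_worst(values, z_percent):
    """Average the worst (largest-distortion) z_percent of the unit distortions."""
    ranked = sorted(values, reverse=True)
    n = max(1, round(len(ranked) * z_percent / 100.0))
    worst = ranked[:n]
    return sum(worst) / len(worst)
```

With `z_percent=1` this reduces to the W% = 1% rule used later for pooling the flicker distortion of the image group.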
The step of calculating the second-class distortion from the space-time texture features comprises: taking the distortion of the space-time units with the largest space-time texture distortion as the second-class distortion.
Finally, the space-time texture distortion of the whole video sequence to be evaluated (the virtual viewpoint video) can be obtained by averaging the space-time texture distortion over the image groups, as in Formula 12, or by other pooling criteria:
DA_seq = (1 / K) · Σ_{g=1}^{K} DA_GoP(g)   Formula 12;
where K is the number of image groups in the video sequence.
Step S140: the first-class distortion and the second-class distortion are combined into a total distortion, from which the quality of the video to be evaluated is judged.
After the temporal flicker distortion and the space-time texture distortion of the video to be evaluated (the virtual viewpoint video) have been calculated, they can be combined to obtain the total distortion. The combination rule is not restricted; for example, the total distortion D can be obtained by Formula 13:
D = DA × log10(1 + DF)   Formula 13;
The first-class distortion and the second-class distortion are combined in this way to obtain the total distortion.
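Formula 13 translates directly into code; the sketch below simply combines the two pooled scores (both assumed non-negative, as the description implies).

```python
import math

def total_distortion(da, df):
    """Combine texture distortion DA and flicker distortion DF (Formula 13):
    D = DA * log10(1 + DF)."""
    return da * math.log10(1.0 + df)
```

Note that a video with no flicker distortion (DF = 0) gets a total distortion of 0 under this rule regardless of DA, which is why the patent leaves the combination rule open to alternatives.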
Embodiment 2:
Step S110: the original reference video and the video to be evaluated are each divided into space-time units.
Step S110 comprises:
1. The original reference video and the video to be evaluated are each divided into image groups, each consisting of several temporally consecutive frames.
2. Each image in an image group is divided into image blocks; temporally consecutive image blocks form a space-time unit.
In this embodiment the space-time unit is composed of several temporally consecutive image blocks at different spatial positions that describe the motion trajectory of the same object.
Specifically, the original reference video and the video to be evaluated must first be divided into space-time units; a schematic of the process is shown in Figure 3. The video sequence (original reference video and video to be evaluated) is divided into image groups, each consisting of several temporally consecutive frames. The choice of image-group length depends on the temporal fixation period of the human eye and on the frame rate of the video.
Suppose the temporal fixation period of the human eye is T seconds and the frame rate of the video is N frames per second; then the image-group length is N × T frames, i.e. N × T temporally adjacent frames form one image group. Image groups may be formed with a sliding window, i.e. with overlap between groups, or without overlap between groups.
Consider an image group of length N × T frames. If each image is divided into image blocks of width w and height h, then a temporally consecutive group of image blocks within the image group constitutes a space-time unit.
As shown in Figure 3, the second-class space-time unit is constructed from several temporally consecutive image blocks at different spatial positions along the motion trajectory of the same object. Each space-time unit contains N × T pixel blocks of width w and height h. The space-time unit is the basic unit for calculating the distortion of the virtual viewpoint video: the distortion computed for each space-time unit is pooled into the distortion of its image group, and the distortions of the image groups are finally pooled into the distortion of the whole video sequence.
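The division into image groups and space-time units can be sketched as below for the simpler, first-class (co-located) units; second-class, trajectory-following units would additionally offset each block position per frame by its motion vector. Non-overlapping image groups and exact divisibility are assumptions of this sketch.

```python
import numpy as np

def split_spacetime_units(video, gop_len, bw, bh):
    """Split a (frames, height, width) array into co-located space-time units:
    one unit per (image group, block row, block column), each shaped
    (gop_len, bh, bw)."""
    f, h, w = video.shape
    units = []
    for g0 in range(0, f - gop_len + 1, gop_len):   # non-overlapping GoPs
        for y in range(0, h - bh + 1, bh):
            for x in range(0, w - bw + 1, bw):
                units.append(video[g0:g0 + gop_len, y:y + bh, x:x + bw])
    return units

video = np.zeros((8, 16, 16))                        # 8 frames of 16x16 pixels
units = split_spacetime_units(video, gop_len=4, bw=8, bh=8)
# 2 image groups x (2 x 2) blocks = 8 space-time units
```

A sliding-window variant, as the description allows, would simply use a step smaller than `gop_len` in the outer loop.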
The second-class space-time unit is composed of image blocks along the same motion trajectory. The motion trajectory of an image block can be obtained with a block-based motion estimation algorithm, such as full search or three-step search. In addition, because a video sequence may contain global motion produced by camera motion, the global motion vector must be estimated to correct the motion trajectory.
Camera motion includes rotation, panning and zooming, all of which can be described by an affine transform model, for example:
x′ = θ1·x + θ2·y + θ3,   y′ = θ4·x + θ5·y + θ6   Formula 14;
where (x, y) is the pixel coordinate in the current image and (x′, y′) is the pixel position in the motion-estimation target image. The vector θ = [θ1, ..., θ6] holds the parameters of the affine transform model. Subtracting (x, y) from (x′, y′) gives the global motion vector between the two images. The global motion vector is used to detect whether a tracked image block has moved out of the picture, and hence to decide whether to include the block in the space-time unit formed along the motion trajectory.
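The six-parameter affine model of Formula 14, the derived global motion vector, and the out-of-picture test can be sketched as follows; the boundary check is an illustrative reading of the description, not the patent's exact criterion.

```python
def affine_warp(x, y, theta):
    """Six-parameter affine model (Formula 14): maps (x, y) to (x', y')."""
    t1, t2, t3, t4, t5, t6 = theta
    return t1 * x + t2 * y + t3, t4 * x + t5 * y + t6

def global_motion_vector(x, y, theta):
    """Global motion at pixel (x, y): (x', y') - (x, y)."""
    xp, yp = affine_warp(x, y, theta)
    return xp - x, yp - y

def block_left_picture(x, y, theta, width, height):
    """True if the tracked position falls outside the picture, in which case
    the block is excluded from the trajectory-following space-time unit."""
    xp, yp = affine_warp(x, y, theta)
    return not (0 <= xp < width and 0 <= yp < height)
```

A pure translation, for example, corresponds to θ = [1, 0, tx, 0, 1, ty]; rotation and zoom fill in the off-diagonal and scaling parameters.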
Step S120: the first-class distortion is calculated from the temporal flicker features of the original reference video and of the video to be evaluated.
In this embodiment, the virtual viewpoint video quality evaluation method further comprises the steps of:
calculating a first temporal gradient for each pixel of the original reference video;
calculating a second temporal gradient for each pixel of the video to be evaluated;
judging from the first temporal gradient and the second temporal gradient whether each pixel exhibits temporal flicker distortion, and if so, calculating the temporal flicker distortion intensity.
Specifically, the temporal flicker distortion is computed from the sign difference and the absolute-value difference of the first and second temporal gradients.
Specifically, the temporal flicker distortion is computed for each space-time unit; the flow chart is shown in Figure 6. The space-time units at corresponding positions in the original reference video and the video to be evaluated (the virtual viewpoint video) are taken as input. Suppose a pixel in the middle frame (the i-th frame) of the image group in Figure 6 has coordinates (x_i, y_i). Within the space-time unit containing this pixel, the coordinates of the pixel along its motion trajectory are [(x_{i-N}, y_{i-N}), ..., (x_i, y_i), ..., (x_{i+N}, y_{i+N})]. The temporal flicker distortion DF_{x_i,y_i} at position (x_i, y_i) of the i-th frame is computed with Formula 15.
DF_{x_i,y_i} = (1 / (2N + 1)) · Σ_{n=i-N}^{i+N} Φ(x_n, y_n, n) · Δ_t(x_n, y_n, n)   Formula 15;
Here DF_{x_i,y_i} is the temporal flicker distortion of the cluster of temporally adjacent pixels along the motion trajectory through the pixel at spatial coordinates (x_i, y_i), and 2N + 1 is the length of the image group and of the space-time unit.
In this embodiment, the virtual viewpoint video quality evaluation method further comprises the steps of: calculating the temporal flicker distortion of the cluster of temporally adjacent pixels in a space-time unit of the video to be evaluated; detecting with a first function whether temporal flicker distortion exists in the cluster of temporally adjacent pixels; and, if so, measuring with a second function the temporal flicker distortion intensity in the cluster.
Specifically, the first function Φ(·) detects whether the pixel at position (x, y, n) exhibits temporal flicker distortion, and the second function Δ(·) measures the temporal flicker distortion intensity of the pixel at position (x, y, n).
The first function Φ(·) is given by Formula 16:
Formula 16;
The condition is that the temporal gradient of the point I(x, y, n) in the original reference video and the temporal gradient of the corresponding point in the video to be evaluated (the virtual viewpoint video) have opposite directions; a point meeting this condition is considered to exhibit temporal flicker distortion. ρ is a visual perception threshold, whose value is as in Embodiment 1.
The second function Δ(·) measures the flicker distortion intensity of the pixel at position (x, y, n); it is given by Formula 17:
Δ_t(x, y, n) = |∇_t I_dis(x, y, n) − ∇_t I_ref(x, y, n)| / |∇_t I_ref(x, y, n)|   Formula 17;
The numerator reflects the magnitude of the temporal gradient distortion, and dividing by the reference temporal gradient in the denominator accounts for the masking effect. Since the precondition for the first function Φ(·) to be nonzero is that the two temporal gradients have opposite directions, Δ_t(x, y, n) is necessarily greater than 1 and proportional to the intensity of the temporal flicker distortion. The temporal flicker distortion of the whole space-time unit is denoted DF_tube, where "tube" refers to the tube-shaped space-time unit formed by the pixel trajectories. DF_tube can be computed by Formula 18.
DF_tube = (1 / (w·h)) · Σ_{(x,y)} DF_{x,y}   Formula 18;
where w and h are the width and height of the space-time unit, respectively. After the temporal flicker distortion DF_tube of every space-time unit in the image group has been obtained, the values can be pooled into the temporal flicker distortion DF_GoP of the image group (GoP stands for Group of Pictures). DF_GoP can be obtained by Formula 19:
DF_GoP = (1 / N_W) · Σ_{tube ∈ W} DF_tube   Formula 19;
where W is the set of the W% of space-time units with the most severe temporal flicker distortion in the image group, and N_W is the number of space-time units in W. The purpose of this pooling rule is that the most distorted regions of the video image determine the quality judgement of the human visual system for the whole image. In this embodiment, the value of W% is 1%.
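A minimal sketch of the detector Φ, the intensity measure Δ, and the per-unit averaging of Formulas 15 and 18 is given below. Taking Δ as the gradient-error magnitude normalized by the reference gradient is an assumption (it is consistent with "Δ must be greater than 1" when the signs are opposite), and skipping zero reference gradients is a defensive choice of this sketch.

```python
def phi(g_ref, g_dis, rho):
    """Formula 16 (sketch): flicker is flagged when the reference and distorted
    temporal gradients point in opposite directions and the error exceeds the
    visual perception threshold rho."""
    return 1 if (g_ref * g_dis < 0 and abs(g_dis - g_ref) > rho) else 0

def delta(g_ref, g_dis):
    """Formula 17 (sketch): gradient error normalized by |g_ref| (masking)."""
    return abs(g_dis - g_ref) / abs(g_ref)

def df_tube(ref_grads, dis_grads, rho=0.0):
    """Average flicker distortion over the pixel trajectories of one unit
    (Formulas 15 and 18, collapsed here into a single mean over all samples)."""
    scores = [phi(r, d, rho) * delta(r, d)
              for r, d in zip(ref_grads, dis_grads) if r != 0]
    return sum(scores) / len(scores) if scores else 0.0
```

When the gradients agree in sign, Φ is 0 and the sample contributes nothing; opposite signs contribute Δ > 1, so units with frequent sign flips score high, matching the flicker intuition.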
The step of calculating the first-class distortion from the temporal flicker features comprises: taking the distortion of the space-time units with the largest temporal flicker distortion as the first-class distortion.
Finally, the temporal flicker distortion of the whole video sequence can be obtained by pooling the image-group distortions DF_GoP, either with a worst-region criterion or as an average, as in Formula 20:
DF_seq = (1 / K) · Σ_{g=1}^{K} DF_GoP(g)   Formula 20;
where K is the number of image groups in the video sequence; in this embodiment K = 40. "Seq" stands for sequence.
Step S130: the second-class distortion is calculated from the space-time texture features of the original reference video and of the video to be evaluated.
In this embodiment, the virtual viewpoint video quality evaluation method further comprises the steps of:
calculating the horizontal gradient and the vertical gradient of each pixel in the space-time unit;
calculating the spatial gradient of each pixel in the space-time unit from the horizontal gradient and the vertical gradient;
calculating the space-time texture distortion of the video to be evaluated from the difference between the spatial gradients of the original reference video and the video to be evaluated.
Specifically, when calculating the space-time texture distortion of the video to be evaluated (the virtual viewpoint video), the space-time unit is still the basic unit of distortion calculation; the procedure is shown in Figure 7. First, the spatial gradient ∇I is computed for the pixel I(x, y) of the n-th frame in the space-time unit. This requires first computing the gradients Gx and Gy in the horizontal and vertical directions; example templates for the horizontal and vertical gradients are shown in Figure 4 and Figure 5. Once the horizontal and vertical gradients are obtained, the spatial gradient is computed by Formula 21:
∇I(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²)   Formula 21;
As in Figure 3, suppose the top-left pixel coordinate of a space-time unit in the middle (i-th) frame of the image group is (x_i, y_i). The pixel coordinates along the motion trajectory of this pixel within the space-time unit are, in order, [(x_{i-N}, y_{i-N}), ..., (x_i, y_i), ..., (x_{i+N}, y_{i+N})]. After the spatial gradient of every pixel in the space-time unit has been computed, the mean μ_tube and the standard deviation σ_tube can be calculated by Formula 22 and Formula 23, respectively:
μ_tube = (1 / (w·h·(2N + 1))) · Σ ∇I   Formula 22;
σ_tube = sqrt((1 / (w·h·(2N + 1))) · Σ (∇I − μ_tube)²)   Formula 23;
where w and h are the width and height of the image blocks in the space-time unit, and 2N + 1 is the length of the image group and of the space-time unit. Because pixels in flat regions of the space-time unit have no significant spatial gradient, the computed spatial-gradient standard deviation may fail to reflect the texture characteristics of the unit accurately. A perceptual threshold ε is therefore set: when σ_tube is less than ε, σ_tube is set to ε. The value of ε is as in Embodiment 1.
After the spatial-gradient standard deviations σ_ref and σ_tube of the corresponding space-time units of the original reference video and of the video to be evaluated (the virtual viewpoint video) have been calculated, the difference between the two can be measured in several ways to obtain the space-time texture distortion DA_tube of the video to be evaluated, for example by Formula 24.
Formula 24;
After the texture distortion DA_tube of every space-time unit in the image group has been calculated, the values can be pooled into the texture distortion DA_GoP of the image group. The pooling criterion can be the worst-region rule of Formula 25, in which the worst regions determine the overall perceptual quality:
DA_GoP = (1 / N_Z) · Σ_{tube ∈ Z} DA_tube   Formula 25;
where Z is the set formed by the worst Z% of the DA_tube values in the image group, and N_Z is the number of elements in Z.
The step of calculating the second-class distortion from the space-time texture features comprises: taking the distortion of the space-time units with the largest space-time texture distortion as the second-class distortion.
Finally, the space-time texture distortion of the whole video sequence to be evaluated (the virtual viewpoint video) can be obtained by averaging the space-time texture distortion over the image groups, as in Formula 26, or by other pooling criteria:
DA_seq = (1 / K) · Σ_{g=1}^{K} DA_GoP(g)   Formula 26;
Step S140: the first-class distortion and the second-class distortion are combined into a total distortion, from which the quality of the video to be evaluated is judged.
After the temporal flicker distortion and the space-time texture distortion of the video to be evaluated (the virtual viewpoint video) have been calculated, they can be combined to obtain the total distortion. The combination rule is not restricted; for example, the total distortion D can be obtained by Formula 27:
D = DA × log10(1 + DF)   Formula 27;
The first-class distortion and the second-class distortion are combined in this way to obtain the total distortion.
Based on all of the above embodiments, the virtual viewpoint video quality evaluation method further comprises the step of calculating the temporal flicker distortion from pixel temporal-gradient information, wherein the temporal flicker distortion is proportional to the frequency and to the amplitude of the direction changes of the pixel temporal gradient.
Based on all of the above embodiments, the method further comprises the step of calculating space-time pixel-feature statistics within the space-time unit, wherein the statistics include the mean, variance and standard deviation of the pixel gradient information in the space-time unit, and the pixel gradient information includes the spatial horizontal gradient, spatial vertical gradient, spatial gradient, temporal gradient and space-time gradient of the pixels.
The method further comprises the step of setting a minimum perceptual threshold for the pixel-feature statistics: when a pixel-feature statistic is below the minimum perceptual threshold, it is replaced by the minimum perceptual threshold; when it is above the threshold, it is left unchanged.
Based on all of the above embodiments, the distortion of the virtual viewpoint video is calculated over all the pixels of a space-time unit at a time. The original reference video and the video to be evaluated (the virtual viewpoint video) are first divided into image groups, each consisting of several temporally consecutive frames, and each image group is further divided into space-time units. One kind of space-time unit is composed of image blocks at the same spatial position; the other is composed of image blocks at different spatial positions along the same motion trajectory.
Two classes of distortion are measured within a space-time unit. One is the temporal flicker distortion of the virtual viewpoint video; the other is the space-time texture distortion introduced by the left and right viewpoint texture images. The main basis for calculating the temporal flicker distortion is the temporal gradient of the pixel brightness values. When the direction of the temporal brightness gradient of a pixel in the video to be evaluated (the virtual viewpoint video) is opposite to that of the corresponding pixel in the original reference video, the pixel is considered to exhibit temporal flicker distortion, with a magnitude proportional to the error of the temporal gradient.
On the other hand, the space-time texture distortion introduced by the texture images of the virtual viewpoint video is measured by the change in the standard deviation of the pixel spatial gradients. Finally, the temporal flicker distortion and the space-time texture distortion measured for each space-time unit are pooled into the distortion of the image group, on the principle that the worst-quality space-time units determine the perceptual distortion of the whole image group for the human visual system. The distortions of the image groups are finally pooled into the distortion of the whole video sequence. A virtual viewpoint video data set was built from the 3D video sequences provided by MPEG (Moving Picture Experts Group), and viewers were organized to give subjective quality scores; the quality evaluation method described above was then used to calculate the distortion of the virtual viewpoint videos in the data set. The Spearman rank-order correlation coefficient (SROCC) between the results and the subjective scores reaches 0.867, and the Pearson linear correlation coefficient (PLCC) reaches 0.785, clearly higher than the prior art.
The above method can be used in applications such as video-coding algorithm optimization, 3D video content generation and post-processing.
The above virtual viewpoint video quality evaluation method takes the space-time unit as the basic unit of distortion calculation, thereby avoiding the overestimation of virtual viewpoint video distortion caused by pixel-by-pixel distortion computation. Distortions such as constant pixel shifts or camera random noise in the virtual viewpoint video are hardly perceptible to the human eye, yet cause pixel-based quality criteria to score the video too low. The temporal flicker distortion and the space-time texture distortion are essentially computed from the feature statistics of all the pixels inside a space-time unit, and are therefore insensitive to constant-shift distortion and less affected by factors such as camera random noise. Taking the direction changes and amplitude changes of the pixel temporal gradient as the feature describing temporal flicker distortion therefore reflects the flicker in the virtual viewpoint video accurately.
The simulation results of evaluating videos with the virtual viewpoint video quality evaluation method are as follows. First, a virtual viewpoint video data set was built, using 10 video sequences provided by the Moving Picture Experts Group (MPEG). For each video sequence, a pair of left and right viewpoint videos was selected, and the texture images and depth images of the left and right viewpoints were compression-coded so as to introduce compression artifacts. Each sequence generated 14 virtual viewpoint videos with different degrees of distortion, forming a data set of 10 original reference videos and 140 virtual viewpoint videos in total. In the second step of the experiment, viewers were organized to watch the videos in the data set and give subjective quality scores; 56 viewers took part in the subjective scoring.
The next experiment used the virtual viewpoint video quality evaluation method to calculate quality scores for the distorted videos in the data set, and compared the calculated video quality scores with the mean of the subjective scores of the 56 viewers, examining mainly the Spearman rank-order correlation coefficient (SROCC) and the Pearson linear correlation coefficient (PLCC). The higher the correlation coefficients, the more consistent the calculated video quality scores are with the subjective quality scores given by the human visual system.
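The two correlation coefficients can be computed as sketched below; the implementation is a plain-Python sketch without tie handling, and the score lists are hypothetical, not the patent's data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson linear correlation coefficient (PLCC)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def spearman(xs, ys):
    """Spearman rank-order correlation coefficient (SROCC):
    Pearson correlation of the ranks (no tie handling in this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r
    return pearson(ranks(xs), ranks(ys))

predicted = [1.2, 2.9, 2.1, 4.3, 3.8]    # hypothetical objective scores
subjective = [1.0, 3.0, 2.0, 5.0, 4.0]   # hypothetical mean opinion scores
```

Because SROCC depends only on rank order, a metric that orders the videos correctly scores SROCC = 1 even when its absolute values differ from the subjective scale, which is why both coefficients are reported.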
Technical scheme 1 of this application uses the first-class space-time unit structure, and technical scheme 2 uses the second-class space-time unit structure. The test results show that, for the videos of the whole data set, the final result of technical scheme 1 is SROCC 0.845 and PLCC 0.773, and that of technical scheme 2 is SROCC 0.867 and PLCC 0.785. For the virtual viewpoint videos in the data set with only depth distortion, the final result of technical scheme 1 is SROCC 0.790 and PLCC 0.763, and that of technical scheme 2 is SROCC 0.810 and PLCC 0.801. For the virtual viewpoint videos in the data set with only texture distortion, the final result of technical scheme 1 is SROCC 0.785 and PLCC 0.667, and that of technical scheme 2 is SROCC 0.828 and PLCC 0.673. For the virtual viewpoint videos in the data set with both texture distortion and depth distortion, the final result of technical scheme 1 is SROCC 0.854 and PLCC 0.808, and that of technical scheme 2 is SROCC 0.868 and PLCC 0.815. For comparison with the current state of the art, the following schemes were also applied to the same virtual viewpoint video data set for distortion calculation and quality evaluation: PSNR (Peak Signal to Noise Ratio), SSIM (Structural Similarity Index Measurement), VQM (Video Quality Model) and MOVIE (Motion-based Video Integrity Evaluation index).
The comparison between the virtual viewpoint video quality evaluation method and the current mainstream techniques is given in the table below:
As can be seen from the table above, the virtual viewpoint video quality evaluation method yields results consistent with the subjective quality scores of the human visual system, and outperforms the existing technical solutions.
Based on all of the above embodiments, the size, shape and extent of the space-time unit are not fixed. Any set of adjacent pixels in the temporal, spatial or space-time domain may form a space-time unit, and all such variants fall within the scope of protection of the present invention.
The motion estimation method is likewise not fixed; it may be block-based motion estimation or pixel-based motion estimation. The global motion model is not fixed either: it may be a two-parameter translation model, a four-parameter geometric model, a six-parameter affine model, an eight-parameter perspective model, and so on.
The temporal flicker distortion takes the change of the temporal features of the pixels as the basis for measuring the temporal distortion. The temporal features are not limited to the temporal gradient; they include the mean brightness, brightness variance, brightness distribution and various other features of the pixels at the same position over a period of time.
The method of computing the pixel temporal gradient in the virtual viewpoint video quality evaluation method is also not fixed: any method that computes how a single pixel or a local group of pixels changes over two or more consecutive frames counts as a pixel temporal-gradient computation and falls within the scope of the present invention.
The space-time texture distortion in the virtual viewpoint video quality evaluation method gathers statistics of the pixel features within the space-time unit. The pixel features are not limited to the pixel gradient: using edge-detection information, pixel gradient direction, brightness value, color value, chroma difference or other features also falls within the scope of the present invention. Likewise, the statistics are not limited to the variance or standard deviation: using the mean, probability-distribution coefficients or other statistics also falls within the scope of the present invention.
The templates for computing the horizontal and vertical pixel gradients in the virtual viewpoint video quality evaluation method are not limited to those of Figure 4 and Figure 5; any template for computing horizontal and vertical pixel gradients is applicable to the horizontal and vertical gradient computation of the technical solution of the present invention.
The above virtual viewpoint video quality evaluation method measures the temporal flicker distortion of the virtual viewpoint video over all the pixels of a space-time unit, avoiding the misestimation of the perceptual distortion of the virtual viewpoint video caused by pixel-by-pixel temporal flicker computation. In calculating the temporal flicker distortion of the virtual viewpoint video, both the distortion caused by depth-map errors and the distortion introduced by the left and right viewpoint texture images are considered. The method can therefore effectively assess the temporal flicker distortion that strongly affects the subjective quality of virtual viewpoint video, so that the evaluation agrees better with human subjective perception and the video quality evaluation is more accurate and comprehensive.
The technical features of the embodiments described above may be combined arbitrarily. For conciseness, not every possible combination of the technical features of the above embodiments has been described; nevertheless, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope recorded in this specification.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, and these all belong to the scope of protection of the present invention. Therefore, the scope of protection of this patent shall be determined by the appended claims.
Claims (10)
1. A virtual viewpoint video quality evaluation method, comprising the following steps:
dividing an original reference video and a video to be evaluated into space-time units, respectively;
calculating a first-class distortion of each space-time unit according to temporal flicker features of the original reference video and of the video to be evaluated;
calculating a second-class distortion of each space-time unit according to space-time texture features of the original reference video and of the video to be evaluated;
combining the first-class distortion and the second-class distortion into a total distortion and judging the quality of the video to be evaluated.
2. The virtual viewpoint video quality evaluation method according to claim 1, wherein the step of dividing the original reference video and the video to be evaluated into space-time units respectively comprises:
dividing the original reference video and the video to be evaluated respectively into image groups each consisting of several temporally consecutive frames;
dividing each image in an image group into image blocks, temporally consecutive image blocks forming a space-time unit.
3. The virtual viewpoint video quality evaluation method according to claim 2, wherein the space-time unit is composed of several temporally consecutive image blocks at the same spatial position; or
is composed of several temporally consecutive image blocks at different spatial positions describing the motion trajectory of the same object.
4. The virtual viewpoint video quality evaluation method according to claim 1, wherein the step of calculating the first-class distortion of each space-time unit according to the temporal flicker features of the original reference video and of the video to be evaluated comprises:
calculating a first temporal gradient for each pixel of the original reference video;
calculating a second temporal gradient for each pixel of the video to be evaluated;
judging from the first temporal gradient and the second temporal gradient whether each pixel exhibits temporal flicker distortion, and if so, calculating a temporal flicker distortion intensity.
5. The virtual viewpoint video quality evaluation method according to claim 1, wherein the step of calculating the first-class distortion of each space-time unit according to the temporal flicker features of the original reference video and of the video to be evaluated further comprises: calculating the temporal flicker distortion from pixel temporal-gradient information, wherein the temporal flicker distortion is proportional to the frequency and to the amplitude of the direction changes of the pixel temporal gradient.
6. The virtual-view video quality evaluation method according to claim 1, wherein the step of calculating the first-type distortion of each space-time unit from the temporal flicker features of the original reference video and the video to be evaluated further comprises:
calculating the temporal flicker distortion corresponding to a cluster of temporally adjacent pixels in a space-time unit of the video to be evaluated;
detecting, with a first function, whether temporal flicker distortion exists within the cluster of temporally adjacent pixels;
if so, detecting the temporal flicker distortion strength within the cluster of temporally adjacent pixels with a second function.
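Claims 5 and 6 together suggest a score that grows with how often the temporal gradient flips direction and with the magnitude of those flips. The patent leaves the first and second functions unspecified; below, flip detection stands in for the first function and the product of flip count and mean flip amplitude for the second, purely as an assumed combination:

```python
import numpy as np

def cluster_flicker(trace):
    """trace: intensity values of one temporally adjacent pixel cluster,
    one value per consecutive frame. Returns a flicker distortion score."""
    g = np.diff(np.asarray(trace, dtype=float))   # temporal gradient
    flips = np.flatnonzero(g[:-1] * g[1:] < 0)    # direction changes (first function)
    if flips.size == 0:
        return 0.0                                # no flicker detected
    amplitude = np.abs(g[flips + 1] - g[flips]).mean()
    return flips.size * amplitude                 # strength (second function)
```

A monotone trace yields zero, while an oscillating trace scores in proportion to both the number and size of its direction changes.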
7. The virtual-view video quality evaluation method according to claim 1, wherein the step of calculating the second-type distortion of each space-time unit from the space-time texture features of the original reference video and the video to be evaluated comprises:
calculating the horizontal-direction gradient and the vertical-direction gradient of each pixel in the space-time unit;
calculating the spatial gradient of each pixel in the space-time unit from the horizontal-direction gradient and the vertical-direction gradient;
calculating the space-time texture distortion of the video to be evaluated from the spatial-gradient difference between the space-time units of the original reference video and of the video to be evaluated.
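The steps of claim 7 can be sketched as below, with forward differences as the directional gradients, their Euclidean magnitude as the spatial gradient, and the mean absolute gradient difference as the unit's texture distortion; the pooling choice is an assumption, not the patent's stated formula:

```python
import numpy as np

def spatial_gradient(frame):
    """Per-pixel spatial gradient magnitude of one image block."""
    f = frame.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal-direction gradient
    gy[:-1, :] = f[1:, :] - f[:-1, :]   # vertical-direction gradient
    return np.hypot(gx, gy)             # combined spatial gradient

def texture_distortion(ref_unit, dist_unit):
    """Mean absolute spatial-gradient difference over a space-time unit
    (a list of corresponding image blocks from reference and evaluated video)."""
    return float(np.mean([np.abs(spatial_gradient(r) - spatial_gradient(d)).mean()
                          for r, d in zip(ref_unit, dist_unit)]))

step = np.zeros((4, 4))
step[:, 2:] = 1.0                       # a vertical edge the reference lacks
d = texture_distortion([np.zeros((4, 4))], [step])
```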
8. The virtual-view video quality evaluation method according to claim 1, further comprising the step of:
calculating pixel feature statistics within each space-time unit, wherein the pixel feature statistics include the mean, variance, and standard deviation of the pixel gradient information within the space-time unit; and the pixel gradient information includes the spatial horizontal-direction gradient, spatial vertical-direction gradient, spatial gradient, temporal gradient, and space-time gradient of the pixels.
9. The virtual-view video quality evaluation method according to claim 8, further comprising the steps of:
setting a minimum perception threshold for the pixel feature statistics;
when a pixel feature statistic is smaller than the minimum perception threshold, replacing the pixel feature statistic with the minimum perception threshold;
when a pixel feature statistic is larger than the minimum perception threshold, keeping the pixel feature statistic unchanged.
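Claims 8 and 9 amount to computing gradient statistics per space-time unit and clamping each statistic from below, so that imperceptibly small values do not distort later ratios. A sketch under assumed values (the threshold of 1.0 is illustrative, not from the patent):

```python
import numpy as np

MIN_PERCEPTION = 1.0  # assumed minimum perception threshold

def clamped_stats(gradients, floor=MIN_PERCEPTION):
    """Mean/variance/standard deviation of pixel gradient values in one
    space-time unit, each replaced by `floor` if it falls below it."""
    g = np.asarray(gradients, dtype=float)
    stats = {"mean": g.mean(), "var": g.var(), "std": g.std()}
    return {k: max(v, floor) for k, v in stats.items()}
```

For a perfectly flat unit the variance and standard deviation are zero and get lifted to the threshold; statistics already above the threshold pass through unchanged.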
10. The virtual-view video quality evaluation method according to claim 1, wherein the steps of calculating the first-type distortion from the temporal flicker features and calculating the second-type distortion from the space-time texture features comprise:
taking the distortion of a series of space-time units having the largest temporal flicker distortion among the space-time units as the first-type distortion; and
taking the distortion of a series of space-time units having the largest space-time texture distortion among the space-time units as the second-type distortion.
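The pooling in claim 10 keeps only the worst-scoring space-time units. Reading "a series of maximum units" as the mean of the top-k scores is an assumption, and k is an illustrative parameter:

```python
def worst_case_pool(unit_scores, k=3):
    """Pool per-unit distortion scores by averaging the k largest,
    so a few severely distorted units dominate the overall score."""
    worst = sorted(unit_scores, reverse=True)[:k]
    return sum(worst) / len(worst)

flicker_scores = [0.1, 0.9, 0.4, 0.8, 0.2]    # hypothetical per-unit flicker distortion
first_type = worst_case_pool(flicker_scores)  # mean of 0.9, 0.8, 0.4
```

The same function would pool the per-unit texture distortions into the second-type distortion before the two are combined into the total.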
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510395100.3A CN106341677B (en) | 2015-07-07 | 2015-07-07 | Virtual view method for evaluating video quality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106341677A CN106341677A (en) | 2017-01-18 |
CN106341677B true CN106341677B (en) | 2018-04-20 |
Family
ID=57826441
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510395100.3A Active CN106341677B (en) | 2015-07-07 | 2015-07-07 | Virtual view method for evaluating video quality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106341677B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106973281B (en) * | 2017-01-19 | 2018-12-07 | 宁波大学 | A kind of virtual view video quality prediction technique |
CN107147906B (en) * | 2017-06-12 | 2019-04-02 | 中国矿业大学 | A kind of virtual perspective synthetic video quality without reference evaluation method |
CN108156451B (en) * | 2017-12-11 | 2019-09-13 | 江苏东大金智信息***有限公司 | A kind of 3-D image/video without reference mass appraisal procedure |
CN110401832B (en) * | 2019-07-19 | 2020-11-03 | 南京航空航天大学 | Panoramic video objective quality assessment method based on space-time pipeline modeling |
US11310475B2 (en) | 2019-08-05 | 2022-04-19 | City University Of Hong Kong | Video quality determination system and method |
CN113014918B (en) * | 2021-03-03 | 2022-09-02 | 重庆理工大学 | Virtual viewpoint image quality evaluation method based on skewness and structural features |
CN113793307A (en) * | 2021-08-23 | 2021-12-14 | 上海派影医疗科技有限公司 | Automatic labeling method and system suitable for multi-type pathological images |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101742355A (en) * | 2009-12-24 | 2010-06-16 | 厦门大学 | Method for partial reference evaluation of wireless videos based on space-time domain feature extraction |
CN103391450A (en) * | 2013-07-12 | 2013-11-13 | 福州大学 | Spatio-temporal union reference-free video quality detecting method |
CN104023227A (en) * | 2014-05-28 | 2014-09-03 | 宁波大学 | Objective video quality evaluation method based on space domain and time domain structural similarities |
CN104023225A (en) * | 2014-05-28 | 2014-09-03 | 北京邮电大学 | No-reference video quality evaluation method based on space-time domain natural scene statistics characteristics |
CN104023226A (en) * | 2014-05-28 | 2014-09-03 | 北京邮电大学 | HVS-based novel video quality evaluation method |
CN104243970A (en) * | 2013-11-14 | 2014-12-24 | 同济大学 | 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity |
CN104394403A (en) * | 2014-11-04 | 2015-03-04 | 宁波大学 | A compression-distortion-oriented stereoscopic video quality objective evaluating method |
CN104754322A (en) * | 2013-12-27 | 2015-07-01 | 华为技术有限公司 | Stereoscopic video comfort evaluation method and device |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108600745A (en) * | 2018-08-06 | 2018-09-28 | 北京理工大学 | A kind of method for evaluating video quality based on time-space domain slice multichannel chromatogram configuration |
CN108600745B (en) * | 2018-08-06 | 2020-02-18 | 北京理工大学 | Video quality evaluation method based on time-space domain slice multi-map configuration |
CN110636282A (en) * | 2019-09-24 | 2019-12-31 | 宁波大学 | No-reference asymmetric virtual viewpoint three-dimensional video quality evaluation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106341677B (en) | Virtual view method for evaluating video quality | |
CN103763552B (en) | Stereoscopic image non-reference quality evaluation method based on visual perception characteristics | |
CN102523477B (en) | Stereoscopic video quality evaluation method based on binocular minimum discernible distortion model | |
CN104079925B (en) | Ultra high-definition video image quality method for objectively evaluating based on vision perception characteristic | |
CN103780895B (en) | A kind of three-dimensional video quality evaluation method | |
CN103152600A (en) | Three-dimensional video quality evaluation method | |
CN106303507B (en) | Video quality evaluation without reference method based on space-time united information | |
CN104811691B (en) | A kind of stereoscopic video quality method for objectively evaluating based on wavelet transformation | |
CN108109147A (en) | A kind of reference-free quality evaluation method of blurred picture | |
CN105338343A (en) | No-reference stereo image quality evaluation method based on binocular perception | |
CN101146226A (en) | A highly-clear video image quality evaluation method and device based on self-adapted ST area | |
CN104243973A (en) | Video perceived quality non-reference objective evaluation method based on areas of interest | |
CN109345502A (en) | A kind of stereo image quality evaluation method based on disparity map stereochemical structure information extraction | |
Ekmekcioglu et al. | Depth based perceptual quality assessment for synthesised camera viewpoints | |
Tsai et al. | Quality assessment of 3D synthesized views with depth map distortion | |
CN104754322A (en) | Stereoscopic video comfort evaluation method and device | |
CN106875389A (en) | Three-dimensional video quality evaluation method based on motion conspicuousness | |
Hong et al. | A spatio-temporal perceptual quality index measuring compression distortions of three-dimensional video | |
CN104574424B (en) | Based on the nothing reference image blur evaluation method of multiresolution DCT edge gradient statistics | |
Zhou et al. | No-reference quality assessment of DIBR-synthesized videos by measuring temporal flickering | |
Jin et al. | Validation of a new full reference metric for quality assessment of mobile 3DTV content | |
CN105430397B (en) | A kind of 3D rendering Quality of experience Forecasting Methodology and device | |
Devnani et al. | Comparative analysis of image quality measures | |
Mahmood et al. | Objective quality assessment of 3D stereoscopic video based on motion vectors and depth map features | |
CN102843576A (en) | Steganography analyzing method aiming at modem-sharing unit (MSU) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||