CN109523506A - Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement - Google Patents


Info

Publication number
CN109523506A
CN109523506A
Authority
CN
China
Prior art keywords
image
view
stereo
channel
right view
Prior art date
Legal status
Granted
Application number
CN201811109216.6A
Other languages
Chinese (zh)
Other versions
CN109523506B (en)
Inventor
丁勇
孙光明
邓瑞喆
周博
周一博
孙阳阳
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201811109216.6A
Publication of CN109523506A
Application granted
Publication of CN109523506B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection


Abstract

The invention discloses a full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement. First, the stereo image is converted from the RGB color space to the YIQ color space, the luminance component is extracted from the Y channel, and the disparity map and visual saliency map are obtained; the left and right views of the Y channel are then fused into an intermediate image, from which saliency-enhanced edge/texture and depth features are extracted and measured for similarity, yielding the corresponding metrics. Second, color features are extracted from the I and Q channels of the stereo image, and binocular fusion and similarity measurement yield the saliency-enhanced color metric. Finally, all metrics are fed to support vector regression for training and prediction, producing an objective quality score. Experiments show that the proposed objective evaluation of stereo image quality is highly consistent with subjective evaluation and outperforms most existing stereo image quality evaluation methods.

Description

Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement
Technical field
The invention belongs to the fields of image processing technology and computer vision, and in particular relates to a full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement.
Background art
Video and images suffer various distortions during sampling, compression, transmission, and reconstruction, all of which degrade their quality. Human expectations for video and image quality keep rising, so real-time quality monitoring during transmission has become increasingly urgent; with the development of 3D video technology, real-time measurement of stereo image quality has likewise become a pressing problem, making stereo image quality assessment particularly important. Subjective stereo image quality evaluation methods require human observers to assign a quality score to every image; they are time-consuming and poorly repeatable, and therefore cannot be deployed in practical end-to-end video monitoring systems. It is thus necessary to develop objective and effective stereo image quality evaluation methods that can assess the quality of stereo images automatically, efficiently, and objectively. A good stereo image quality evaluation method should predict stereo image quality well and agree closely with the results of subjective measurement.
Objective stereo image quality methods fall into three categories according to the reference used: full-reference, reduced-reference, and no-reference. Full-reference methods require the complete original image as the reference for quality evaluation; reduced-reference methods use only partial feature information of the original image; no-reference methods need no reference at all and obtain the quality score by analyzing the distorted image alone. Classified by methodology, stereo image quality evaluation again falls into three classes: the first directly applies existing 2D algorithms to 3D image quality evaluation; the second accounts for the depth information unique to 3D by first extracting features from the left and right views separately and then fusing the features; the third first fuses the two views into a binocular (cyclopean) image and then performs feature extraction and quality scoring. Of these, the first class is the simplest and performs worst, while the third class, because it matches the binocular fusion mechanism of the human brain, has been widely studied and achieves good results. However, because visual models of stereo images are still imperfect, objective stereo image quality evaluation remains an active and difficult research topic.
Summary of the invention
The invention discloses a full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement. Its purpose is to use a visual saliency model to assist in extracting the edge, texture, and color features of the intermediate stereo image, realize the mapping to stereo image quality, and thereby complete the measurement and evaluation of stereo image quality.
The technical solution adopted by the present invention is as follows:
First, the left and right views of the stereo image pair are converted from the RGB color space to the YIQ color space. The grayscale stereo pair obtained from the Y channel (the luminance component) is processed to obtain the corresponding disparity maps; the left and right views of the grayscale pair are then fused to obtain the intermediate reference and distorted images. A spectral-residual visual saliency model yields left- and right-view saliency maps, which are integrated into a 3D intermediate saliency map. Edge/texture features are extracted from the intermediate reference and distorted images and depth features from the disparity maps of the stereo pairs, and similarity measurement yields a saliency-enhanced metric for each visual feature. Second, because color is also essential image information, color features are extracted from the I and Q channels (the chrominance components) of the stereo pairs, and binocular fusion and similarity measurement yield the color metric. Finally, all feature metrics are fed to support vector regression for training and prediction, producing an objective quality score; this realizes the mapping to stereo image quality and completes the evaluation.
The technical solution adopted by the present invention to solve the technical problem comprises the following steps:
Step (1): Input a reference stereo image pair and a distorted stereo image pair, each consisting of a left view and a right view;
Step (2): Convert the color space of the stereo pairs in step (1) from the RGB color space to the YIQ color space, where the Y channel carries the gray component of the image and the I and Q channels carry its chrominance components. The specific conversion formula is as follows:
Step (3): Construct a Log-Gabor filter model and convolve it with the grayscale stereo pairs obtained from the Y channel in step (2), obtaining the energy response maps of the left and right views of the reference and distorted stereo pairs;
The expression of the Log-Gabor filter h_LG is as follows:
where f_0 and θ_0 denote the center frequency and orientation of the Log-Gabor filter, σ_f and σ_θ denote its radial and angular bandwidths, and f and θ denote the radial coordinate and orientation angle of the filter;
Convolving the Log-Gabor filter with the left and right views of the reference and distorted stereo pairs gives the corresponding energy response map F(x, y), whose expression is as follows:
where I(x, y) is the left or right view of the gray component of the reference or distorted stereo pair, and ⊗ denotes convolution;
Step (4): Extract disparity maps D_ref(x, y) and D_dis(x, y) from the grayscale reference and distorted stereo pairs obtained in step (2), and apply the spectral-residual saliency model to the left and right views of the grayscale reference pair to extract the left and right saliency maps SL_sr(x, y) and SR_sr(x, y);
Step (5): Construct the 3D visual saliency map S_3D(x, y) as follows:
S_3D(x, y) = ω_1·SL_sr(x, y) + ω_2·SR_sr(x, y) + ω_3·CB(x, y) + ω_4·D_ref(x, y) (5-2)
where ω_1 through ω_4 are different weighting factors,
and CB(x, y) denotes the center-bias mechanism.
Step (6): Shift the pixels of the right view of the grayscale stereo pair obtained in step (2) horizontally according to the parallax values of the disparity map obtained in step (4), constructing a calibrated right view I_R((x+d), y) whose pixel coordinates correspond to those of the left view. Then use the Log-Gabor filter model of step (3) to obtain the energy response maps of the left view and the calibrated right view, and compute the normalized left-view weight map W_L(x, y) and calibrated right-view weight map W_R((x+d), y), expressed as follows:
where F_L(x, y) and F_R((x+d), y) are the energy response maps of the left view and the calibrated right view obtained in step (3), and d is the parallax value at the corresponding coordinate of the disparity map D_ref(x, y) computed in step (4);
Step (7): Using the left views of the grayscale reference and distorted pairs from step (2), the calibrated right views from step (6), and the normalized left-view and calibrated right-view weight maps, apply the binocular view fusion model to fuse each stereo pair, obtaining the intermediate reference and distorted images; the fusion formula is as follows:
CI(x, y) = W_L(x, y)·I_L(x, y) + W_R((x+d), y)·I_R((x+d), y) (7-1)
where CI(x, y) is the fused intermediate image, and I_L(x, y) and I_R((x+d), y) are the left view and the calibrated right view of the grayscale stereo pair;
Step (8): Extract edge and texture features from the intermediate grayscale reference and distorted images obtained in step (7);
Extraction of edge features: convolve the image under test with the Sobel operator to obtain a gradient map containing edge contour information; the expression for extracting the edge features of the intermediate reference and distorted images with the Sobel operator is as follows:
where f(x, y) is the left/right view of the stereo pair, ⊗ denotes convolution, and G_x and G_y are the 3 × 3 Sobel horizontal and vertical templates, used to detect the horizontal and vertical edges of the image, respectively; the templates are as follows:
Extraction of texture features: the local binary pattern (LBP) is used, whose expression is as follows:
where g_c is the gray value of the central pixel of the image, g_p is the gray value of a neighboring pixel, numbered 0, 1, 2, …, P counterclockwise starting from the position directly to the right of the central pixel, x and y are the coordinates of the central pixel, P is the number of neighboring pixels, and sgn(x) is the step function;
Step (9): Multiply, pixel by pixel, the visual features of the intermediate reference and distorted images extracted in step (8) by the visual saliency map built in step (5), obtaining the saliency-enhanced visual features as follows:
GMS_R(x, y) = GM_R(x, y)·S_3D(x, y), GMS_D(x, y) = GM_D(x, y)·S_3D(x, y) (9-1)
TIS_R(x, y) = TI_R(x, y)·S_3D(x, y), TIS_D(x, y) = TI_D(x, y)·S_3D(x, y) (9-2)
where GM_R(x, y) and TI_R(x, y) are the edge and texture features of the intermediate reference image, GM_D(x, y) and TI_D(x, y) are the edge and texture features of the intermediate distorted image, and S_3D is the integrated visual saliency map obtained in step (5);
Step (10): Measure the similarity of the saliency-enhanced visual features extracted in step (9), as follows:
where GMS_R(x, y) and TIS_R(x, y) are the saliency-enhanced edge and texture features of the intermediate reference image, GMS_D(x, y) and TIS_D(x, y) are those of the intermediate distorted image, the overlined quantities denote the means of the saliency-enhanced edge and texture features of the intermediate reference/distorted images, M and N are the height and width of the image in pixels, and Index_1 and Index_2 are the similarity metrics of the edge and texture features, respectively;
Step (11): Extract color features from the color stereo pairs obtained from the I and Q channels in step (2), and measure the similarity of the left and right views of the color stereo images separately, obtaining the color similarity maps of the respective channels as follows:
where IL_R(x, y) and QL_R(x, y) are the I- and Q-channel color maps of the reference left view, IL_D(x, y) and QL_D(x, y) are the I- and Q-channel color maps of the distorted left view, and SI_L(x, y) and SQ_L(x, y) are the I- and Q-channel color similarity maps of the left views. The right-view color similarity maps are obtained in the same way as those of the left views. T_1 and T_2 are constants that prevent the denominator from being zero.
Step (12): Fuse the left- and right-view color similarity maps obtained in step (11) with the binocular fusion method of steps (6) and (7), obtaining the intermediate I- and Q-channel color similarity maps SI(x, y) and SQ(x, y).
Step (13): Multiply, pixel by pixel, the I- and Q-channel color similarity maps of the intermediate reference and distorted images obtained in step (12) by the stereo visual saliency map built in step (5), obtaining saliency-enhanced color information maps and, from them, the I- and Q-channel color similarity metrics Index_3 and Index_4, expressed as follows:
where M and N are the height and width of the image in pixels.
Step (14): Extract depth features from the disparity maps of the reference and distorted stereo pairs obtained in step (4), and measure the distortion level of the distorted pair's disparity map; the similarity of the depth features of the reference and distorted pairs is extracted with a pixel-domain-error method and serves as the metric of the disparity-map quality degradation of the distorted stereo pair, expressed as follows:
where D_ref(x, y) is the disparity map of the reference image, D_dis(x, y) is the disparity map of the distorted image, mean(·) is the mean function, and Index_5 is the similarity metric of the depth features;
Step (15): Combine the five vision-related metrics Index_1 through Index_5 obtained in steps (10), (13), and (14), and perform support vector regression (SVR) training and prediction to obtain the optimal prediction model, which is mapped to the objective evaluation score of image quality.
The disparity map extraction of step (4) comprises the following steps:
Step (4.1): Shift all pixels of the right views of the reference and distorted stereo pairs horizontally to the right n times with a step of s pixels per move, obtaining k corrected right views I_R((x + i·s), y) after the horizontal shifts, where i = 1, 2, …, k and k = n/s; each corrected right view carries the label i (i = 1, 2, …, k);
Step (4.2): Compute the structural similarity between the left view of the stereo pair and each of the k corrected right views with the structural similarity algorithm SSIM, obtaining k structural similarity maps (Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004); the SSIM expression is as follows:
SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ (4-1)
where μ_x and μ_y are the means within a corresponding image block of the left view and a corrected right view of the stereo pair; σ_x and σ_y are the variances within that block; σ_xy is the covariance between the block of the left view and the block of the corrected right view; C_1 = (k_1·L)², C_2 = (k_2·L)², C_3 = C_2/2, where k_1 and k_2 are positive numbers much smaller than 1 and L is the maximum gray level of the image, preventing a zero denominator; α, β, γ are the weighting exponents of the three terms and are all greater than zero. l(x, y), c(x, y), and s(x, y) are the luminance, contrast, and structural similarity functions of the stereo image within an image block;
Step (4.3): For each pixel p(x, y) of the left view, take the structural similarity map with the maximum local structural similarity among the k maps; its label i (i = 1, 2, …, k) is the parallax value of pixel p(x, y), recorded as d(x, y). A parallax value can thus be constructed for every pixel, forming the disparity map D.
As a preferred solution of the present invention, the extraction of the visual saliency maps in step (4) is specifically as follows:
The visual saliency maps are extracted with the spectral residual (SR) visual saliency model (X. Hou and L. Zhang, "Saliency detection: A spectral residual approach," in Proc. 20th IEEE Conf. Comput. Vis. Pattern Recognit., Minneapolis, MN, USA, pp. 1-8, Jun. 2007), with the following particulars:
Given an image I(x, y):
where F(·) and F⁻¹(·) are the two-dimensional Fourier transform and its inverse, Re(·) takes the real part, Angle(·) takes the argument, A(f) is the amplitude spectrum, P(f) is the phase spectrum, and S(x, y) is the saliency map obtained by the spectral residual method. In addition, g(x, y) is a Gaussian low-pass filter and h_n(f) is a local mean filter, with expressions as follows:
where σ is the standard deviation of the distribution;
Applying the spectral residual method to the left and right views of the reference image pair yields the left and right visual saliency maps of the reference pair.
In step (15), regression prediction is trained with the support vector regression (SVR) method to obtain the optimal prediction model, specifically:
The SVR training and prediction uses 5-fold cross-validation to train and test the model; the concrete scheme is as follows:
Step (15.1): Randomly divide the samples into five mutually disjoint parts; select four of them for SVR training to obtain the best model, then apply the remaining part to that model for testing, obtaining the corresponding objective quality values as predictions of the subjective quality; the five parts of samples thus allow five training predictions;
Step (15.2): Repeat the operation of step (15.1) 1000 times and take the median of all experimental results to characterize the performance of the proposed model;
The expression is as follows:
Q = SVR(Index_1, Index_2, …, Index_n) (15-1)
where Q is the objective quality evaluation score.
Beneficial effects of the present invention:
The present invention uses the 3D visual saliency map to assist in extracting edge, texture, and color features, thereby realizing the mapping to stereo image quality and an objective evaluation of the quality of distorted stereo pairs. Experimental results show that the proposed method evaluates stereo image quality in good agreement with subjective evaluation, outperforming many typical stereo image quality evaluation methods.
Description of the drawings
Fig. 1 is a schematic diagram of the full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement of the present invention.
Specific embodiment
The method of the present invention is described further below with reference to the accompanying drawing.
Step (1): Using Matlab software, read in the reference stereo pairs and the corresponding distorted stereo pairs of stage I and stage II of the LIVE 3D image database of the University of Texas at Austin, where each stereo pair consists of a left view and a right view.
Step (2): Convert the color space of the stereo pairs in step (1) from the RGB color space to the YIQ color space, where the Y channel carries the gray component of the image and the I and Q channels carry its chrominance components. The specific conversion formula is as follows:
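All code sketches in this record are Python/NumPy illustrations (the embodiment itself was implemented in Matlab). Here the standard NTSC RGB-to-YIQ matrix is assumed, since formula (2-1) is not reproduced in the text:

```python
import numpy as np

# Standard NTSC RGB -> YIQ matrix (an assumption: formula (2-1) is not
# reproduced in this text).
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """rgb: H x W x 3 array in [0, 1]. Returns the Y, I and Q channel images."""
    yiq = rgb @ RGB2YIQ.T
    return yiq[..., 0], yiq[..., 1], yiq[..., 2]
```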
Step (3): Construct a Log-Gabor filter model and convolve it with the grayscale stereo pairs obtained from the Y channel in step (2), obtaining the energy response maps of the left and right views of the reference and distorted stereo pairs;
The expression of the Log-Gabor filter h_LG is as follows:
where f_0 and θ_0 denote the center frequency and orientation of the Log-Gabor filter, σ_f and σ_θ denote its radial and angular bandwidths, and f and θ denote the radial coordinate and orientation angle of the filter. Here σ_θ = π/18, σ_f = 0.75, f_0 = 1/6, θ_0 = 0, f = 0, π/4, π/3, 3π/4, and θ = 0, π/5, 2π/5, 3π/5, 4π/5, giving 4 × 5 = 20 Log-Gabor filter energy response maps. The local energy of the Log-Gabor filter response is defined as the maximum of the energy over the scales, and the local energy at each scale is defined as the sum of the local energies over the orientations;
Convolving the Log-Gabor filter with the left and right views of the reference and distorted stereo pairs gives the corresponding energy response map F(x, y), whose expression is as follows:
where I(x, y) is the left or right view of the gray component of the reference or distorted stereo pair, and ⊗ denotes convolution;
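The transfer function h_LG itself is in an unreproduced formula; the sketch below uses a common Gaussian-on-log-frequency, Gaussian-in-angle construction with the parameters above, so the exact bandwidth parameterization is an assumption:

```python
import numpy as np

def log_gabor_energy(img, f0=1/6, theta0=0.0, sigma_f=0.75, sigma_theta=np.pi/18):
    """Energy response F(x, y) of a single Log-Gabor filter, applied in the
    frequency domain (equivalent to spatial convolution)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.hypot(fx, fy)
    theta = np.arctan2(fy, fx)
    f[0, 0] = 1.0                      # avoid log(0) at DC; DC gain zeroed below
    radial = np.exp(-np.log(f / f0) ** 2 / (2 * sigma_f ** 2))
    radial[0, 0] = 0.0                 # a Log-Gabor filter has no DC response
    dtheta = np.angle(np.exp(1j * (theta - theta0)))  # wrapped angular distance
    angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
    response = np.fft.ifft2(np.fft.fft2(img) * radial * angular)
    return np.abs(response) ** 2
```

The 20-filter bank of the embodiment would call this for each (f_0, θ_0) pair, take the per-scale sum over orientations, and keep the per-pixel maximum over scales.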
Step (4): Extract disparity maps D_ref(x, y) and D_dis(x, y) from the grayscale reference and distorted stereo pairs obtained in step (2), and apply the spectral-residual saliency model to the left and right views of the grayscale reference pair to extract the left and right saliency maps SL_sr(x, y) and SR_sr(x, y);
Step (5): Construct the 3D visual saliency map S_3D(x, y) as follows:
S_3D(x, y) = ω_1·SL_sr(x, y) + ω_2·SR_sr(x, y) + ω_3·CB(x, y) + ω_4·D_ref(x, y) (5-2)
where ω_1 through ω_4 are different weighting factors,
and CB(x, y) denotes the center-bias mechanism; here ω_1 = ω_2 = 0.45 and ω_3 = ω_4 = 0.05.
Step (6): Shift the pixels of the right view of the grayscale stereo pair obtained in step (2) horizontally according to the parallax values of the disparity map obtained in step (4), constructing a calibrated right view I_R((x+d), y) whose pixel coordinates correspond to those of the left view. Then use the Log-Gabor filter model of step (3) to obtain the energy response maps of the left view and the calibrated right view, and compute the normalized left-view weight map W_L(x, y) and calibrated right-view weight map W_R((x+d), y), expressed as follows:
where F_L(x, y) and F_R((x+d), y) are the energy response maps of the left view and the calibrated right view obtained in step (3), and d is the parallax value at the corresponding coordinate of the disparity map D_ref(x, y) computed in step (4);
Step (7): Using the left views of the grayscale reference and distorted pairs from step (2), the calibrated right views from step (6), and the normalized left-view and calibrated right-view weight maps, apply the binocular view fusion model to fuse each stereo pair, obtaining the intermediate reference and distorted images; the fusion formula, implemented in the sketch below, is as follows:
CI(x, y) = W_L(x, y)·I_L(x, y) + W_R((x+d), y)·I_R((x+d), y) (7-1)
where CI(x, y) is the fused intermediate image, and I_L(x, y) and I_R((x+d), y) are the left view and the calibrated right view of the grayscale stereo pair;
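A sketch of steps (6) and (7): the right view is resampled by the per-pixel disparity, the Log-Gabor energy responses are normalized into the weight maps, and the views are blended per formula (7-1). The normalization W = F/(F_L + F_R) is an assumption, since the weight formulas themselves are not reproduced:

```python
import numpy as np

def binocular_fuse(left, right, disparity, energy_left, energy_right_cal):
    """Blend the left view and the disparity-calibrated right view, formula (7-1)."""
    h, w = left.shape
    xs = np.arange(w)[None, :] + disparity.astype(int)   # x + d for every pixel
    xs = np.clip(xs, 0, w - 1)                           # clamp at the border
    right_cal = np.take_along_axis(right, xs, axis=1)    # calibrated view I_R((x+d), y)
    # Normalized energy weights W_L and W_R (assumed normalization).
    total = energy_left + energy_right_cal + 1e-12
    return (energy_left / total) * left + (energy_right_cal / total) * right_cal
```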
Step (8): Extract edge and texture features from the intermediate grayscale reference and distorted images obtained in step (7);
Extraction of edge features: convolve the image under test with the Sobel operator to obtain a gradient map containing edge contour information; the expression for extracting the edge features of the intermediate reference and distorted images with the Sobel operator is as follows:
where f(x, y) is the left/right view of the stereo pair, ⊗ denotes convolution, and G_x and G_y are the 3 × 3 Sobel horizontal and vertical templates, used to detect the horizontal and vertical edges of the image, respectively; the templates are as follows:
Extraction of texture features: the local binary pattern (LBP) is used, whose expression is as follows:
where g_c is the gray value of the central pixel of the image, g_p is the gray value of a neighboring pixel, numbered 0, 1, 2, …, P counterclockwise starting from the position directly to the right of the central pixel, x and y are the coordinates of the central pixel, P is the number of neighboring pixels (here P = 8), and sgn(x) is the step function;
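A sketch of both extractors of step (8), with P = 8 and the counterclockwise neighbor numbering described above:

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_gradient(img):
    """Gradient magnitude map from the 3 x 3 Sobel templates G_x and G_y."""
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T
    return np.hypot(convolve(img, gx), convolve(img, gy))

def lbp8(img):
    """Basic LBP code map with P = 8 neighbors on the 3 x 3 ring."""
    # Neighbors numbered 0..7 counterclockwise, starting directly to the
    # right of the central pixel, as described in step (8).
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    center = img[1:-1, 1:-1]
    code = np.zeros(center.shape, dtype=np.uint8)
    for p, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << p
    return code
```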
Step (9): Multiply, pixel by pixel, the visual features of the intermediate reference and distorted images extracted in step (8) by the visual saliency map built in step (5), obtaining the saliency-enhanced visual features as follows:
GMS_R(x, y) = GM_R(x, y)·S_3D(x, y), GMS_D(x, y) = GM_D(x, y)·S_3D(x, y) (9-1)
TIS_R(x, y) = TI_R(x, y)·S_3D(x, y), TIS_D(x, y) = TI_D(x, y)·S_3D(x, y) (9-2)
where GM_R(x, y) and TI_R(x, y) are the edge and texture features of the intermediate reference image, GM_D(x, y) and TI_D(x, y) are the edge and texture features of the intermediate distorted image, and S_3D is the integrated visual saliency map obtained in step (5);
Step (10): Measure the similarity of the saliency-enhanced visual features extracted in step (9), as follows:
where GMS_R(x, y) and TIS_R(x, y) are the saliency-enhanced edge and texture features of the intermediate reference image, GMS_D(x, y) and TIS_D(x, y) are those of the intermediate distorted image, the overlined quantities denote the means of the saliency-enhanced edge and texture features of the intermediate reference/distorted images, M and N are the height and width of the image in pixels, and Index_1 and Index_2 are the similarity metrics of the edge and texture features, respectively;
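The similarity expressions for Index_1 and Index_2 are not reproduced above; a sketch under the assumption of zero-mean normalized cross-correlation pooling, consistent with the feature means and the M × N pooling mentioned in step (10), is:

```python
import numpy as np

def similarity_index(feat_ref, feat_dis):
    """Global similarity between two saliency-enhanced feature maps.

    Zero-mean normalized cross-correlation is assumed here; the patent's
    exact expressions for Index_1 and Index_2 are not reproduced.
    """
    a = feat_ref - feat_ref.mean()
    b = feat_dis - feat_dis.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

# Index_1 = similarity_index(GMS_R, GMS_D); Index_2 = similarity_index(TIS_R, TIS_D)
```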
Step (11): Extract color features from the color stereo pairs obtained from the I and Q channels in step (2), and measure the similarity of the left and right views of the color stereo images separately, obtaining the color similarity maps of the respective channels as follows:
where IL_R(x, y) and QL_R(x, y) are the I- and Q-channel color maps of the reference left view, IL_D(x, y) and QL_D(x, y) are the I- and Q-channel color maps of the distorted left view, and SI_L(x, y) and SQ_L(x, y) are the I- and Q-channel color similarity maps of the left views. The right-view color similarity maps are obtained in the same way as those of the left views; T_1 and T_2 are constants that prevent the denominator from being zero. Here T_1 = T_2 = 0.5.
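A sketch of one channel's similarity map under the common kernel (2ab + T)/(a² + b² + T), assumed here because the expressions of step (11) are not reproduced; T = 0.5 as stated above:

```python
import numpy as np

def color_similarity(chan_ref, chan_dis, T=0.5):
    """Per-pixel color similarity map for one chrominance channel (I or Q).

    The kernel (2ab + T) / (a^2 + b^2 + T) is an assumption; T = T_1 = T_2
    = 0.5 as stated in the embodiment, keeping the denominator nonzero.
    """
    return (2.0 * chan_ref * chan_dis + T) / (chan_ref ** 2 + chan_dis ** 2 + T)
```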
Step (12): Fuse the left- and right-view color similarity maps obtained in step (11) with the binocular fusion method of steps (6) and (7), obtaining the intermediate I- and Q-channel color similarity maps SI(x, y) and SQ(x, y).
Step (13): Multiply, pixel by pixel, the I- and Q-channel color similarity maps of the intermediate reference and distorted images obtained in step (12) by the stereo visual saliency map built in step (5), obtaining saliency-enhanced color information maps and, from them, the I- and Q-channel color similarity metrics Index_3 and Index_4, expressed as follows:
where M and N are the height and width of the image in pixels.
Step (14): Extract depth features from the disparity maps of the reference and distorted stereo pairs obtained in step (4), and measure the distortion level of the distorted pair's disparity map; the similarity of the depth features of the reference and distorted pairs is extracted with a pixel-domain-error method and serves as the metric of the disparity-map quality degradation of the distorted stereo pair, expressed as follows:
where D_ref(x, y) is the disparity map of the reference image, D_dis(x, y) is the disparity map of the distorted image, mean(·) is the mean function, and Index_5 is the similarity metric of the depth features;
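The exact pixel-domain-error expression for Index_5 is likewise not reproduced; one hedged possibility, mapping the mean absolute disparity error to a similarity score, is:

```python
import numpy as np

def depth_index(d_ref, d_dis):
    """Hypothetical Index_5: mean absolute disparity error mapped to (0, 1]."""
    return float(1.0 / (1.0 + np.mean(np.abs(d_ref - d_dis))))
```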
Step (15): Combine the five vision-related metrics Index_1 through Index_5 obtained in steps (10), (13), and (14), and perform training and prediction with support vector regression (SVR) to obtain the optimal prediction model, which is mapped to the objective evaluation score of image quality.
The disparity map extraction of step (4) comprises the following steps:
Step (4.1): Shift all pixels of the right views of the reference and distorted stereo pairs horizontally to the right n times with a step of s pixels per move, obtaining k corrected right views I_R((x + i·s), y) after the horizontal shifts, where i = 1, 2, …, k and k = n/s; here s = 1 and n = 25, so k = 25. Each corrected right view carries the label i (i = 1, 2, …, k);
Step (4.2): Compute the structural similarity between the left view of the stereo pair and each of the k corrected right views with the structural similarity algorithm SSIM, obtaining k structural similarity maps (Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004); the SSIM expression is as follows:
SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ (4-1)
where μ_x and μ_y are the means within a corresponding image block of the left view and a corrected right view of the stereo pair; σ_x and σ_y are the variances within that block; σ_xy is the covariance between the block of the left view and the block of the corrected right view; C_1 = (k_1·L)², C_2 = (k_2·L)², C_3 = C_2/2, where k_1 and k_2 are positive numbers much smaller than 1 and L is the maximum gray level of the image, preventing a zero denominator; α, β, γ are the weighting exponents of the three terms and are all greater than zero. Here α = β = γ = 1, C_1 = 6.5025, C_2 = 58.5225, and C_3 = 29.2612. l(x, y), c(x, y), and s(x, y) are the luminance, contrast, and structural similarity functions of the stereo image within an image block;
Step (4.3): For each pixel p(x, y) of the left view, take the structural similarity map with the maximum local structural similarity among the k maps; its label i (i = 1, 2, …, k) is the parallax value of pixel p(x, y), recorded as d(x, y). A parallax value can thus be constructed for every pixel, forming the disparity map D.
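A sketch of steps (4.1) through (4.3) using scikit-image's SSIM; its window size and constants are the library's defaults rather than the exact C_1, C_2, C_3 above, and the border handling of the shift is simplified:

```python
import numpy as np
from skimage.metrics import structural_similarity

def disparity_map(left, right, n=25, s=1):
    """Disparity via SSIM over horizontally shifted right views (steps 4.1-4.3)."""
    k = n // s
    ssim_stack = np.empty((k,) + left.shape)
    for i in range(1, k + 1):
        # Horizontal right shift by i*s pixels; np.roll wraps at the border,
        # which a full implementation would replace with padding.
        shifted = np.roll(right, i * s, axis=1)
        _, ssim_map = structural_similarity(
            left, shifted, data_range=left.max() - left.min(), full=True)
        ssim_stack[i - 1] = ssim_map
    # For each pixel, the label of the most similar shift is its parallax value.
    return ssim_stack.argmax(axis=0) + 1
```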
The extraction of the visual saliency maps in step (4) is specifically as follows:
The visual saliency maps are extracted with the spectral residual (SR) visual saliency model (X. Hou and L. Zhang, "Saliency detection: A spectral residual approach," in Proc. 20th IEEE Conf. Comput. Vis. Pattern Recognit., Minneapolis, MN, USA, pp. 1-8, Jun. 2007), with the following particulars:
Given an image I(x, y):
where F(·) and F⁻¹(·) are the two-dimensional Fourier transform and its inverse, Re(·) takes the real part, Angle(·) takes the argument, A(f) is the amplitude spectrum, P(f) is the phase spectrum, and S(x, y) is the saliency map obtained by the spectral residual method. In addition, g(x, y) is a Gaussian low-pass filter and h_n(f) is a local mean filter, with expressions as follows:
where σ is the standard deviation of the distribution; here σ = 1.5;
Applying the spectral residual method to the left and right views of the reference image pair yields the left and right visual saliency maps of the reference pair.
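A sketch of the spectral residual computation with the stated σ = 1.5, using SciPy filters for h_n and g (the local-mean window size n is an assumed small default):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spectral_residual_saliency(img, n=3, sigma=1.5):
    """Spectral residual saliency map (Hou & Zhang, CVPR 2007), sigma = 1.5."""
    spectrum = np.fft.fft2(img)
    log_amp = np.log(np.abs(spectrum) + 1e-12)            # L(f) = log A(f)
    phase = np.angle(spectrum)                            # P(f)
    residual = log_amp - uniform_filter(log_amp, size=n)  # R(f) = L(f) - h_n * L(f)
    recon = np.fft.ifft2(np.exp(residual + 1j * phase))
    return gaussian_filter(np.abs(recon) ** 2, sigma=sigma)  # S(x, y)
```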
In step (15), regression prediction is trained with the support vector regression (SVR) method to obtain the optimal prediction model, specifically:
The SVR training and prediction uses 5-fold cross-validation to train and test the model; the concrete scheme is as follows:
Step (15.1): Randomly divide the samples into five mutually disjoint parts; select four of them for SVR training to obtain the best model, then apply the remaining part to that model for testing, obtaining the corresponding objective quality values as predictions of the subjective quality; the five parts of samples thus allow five training predictions;
Step (15.2): Repeat the operation of step (15.1) 1000 times and take the median of all experimental results to characterize the performance of the proposed model;
The expression is as follows:
Q = SVR(Index_1, Index_2, …, Index_n) (15-1)
where Q is the objective quality evaluation score.
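A sketch of the 5-fold scheme of steps (15.1) and (15.2) with scikit-learn; the kernel and hyperparameters are not specified in the text, so the library's RBF defaults are assumed, and the median here pools the repeated predictions rather than the repeated performance indices:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR

def predict_quality(X, y, repeats=1000):
    """5-fold SVR training/testing, repeated and pooled by the median.

    X: (n_samples, 5) array of Index_1..Index_5; y: subjective scores.
    """
    all_preds = np.empty((repeats, len(y)))
    for rep in range(repeats):
        for tr, te in KFold(n_splits=5, shuffle=True, random_state=rep).split(X):
            model = SVR().fit(X[tr], y[tr])       # train on four folds
            all_preds[rep, te] = model.predict(X[te])  # test on the fifth
    return np.median(all_preds, axis=0)           # objective quality scores Q
```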
Step (16): To verify the performance of the proposed algorithm, the following experiments were carried out.
Two widely adopted databases are used: stage I and stage II of the LIVE stereo image quality rating database (http://live.ece.utexas.edu/research/Quality/live_3dimage.html). Stage I contains 20 reference image pairs and 365 symmetrically distorted image pairs derived from them. Stage II contains 8 reference image pairs and, unlike stage I, 120 symmetrically distorted and 240 asymmetrically distorted image pairs derived from those 8 images; it tests how well the proposed stereo image quality evaluation algorithm handles asymmetrically distorted images. Both stage I and stage II contain the following five distortion types: JPEG compression (JPEG), JPEG2000 compression (JP2K), Gaussian blur (GB), white Gaussian noise (WN), and fast fading (FF). In the experiments, performance is evaluated separately for the images of each distortion type and then for all distorted images together; a good stereo quality evaluation model must perform well not only over all distorted images but also on every distortion type.
The experimental indices are Pearson's linear correlation coefficient (PLCC), Spearman's rank-ordered correlation coefficient (SROCC), and the root-mean-squared error (RMSE), used to test the performance of the proposed objective quality evaluation method. PLCC and RMSE measure the prediction accuracy of an objective image quality evaluation method, while SROCC measures its prediction monotonicity. The higher the PLCC and SROCC and the lower the RMSE, the better the accuracy and robustness of the objective quality evaluation algorithm. The formulas for PLCC, SROCC, and RMSE are as follows:
where n is the total number of images, x_i and y_i are the subjective quality score and the predicted objective quality score, and X_i and Y_i are the ranks of x_i and y_i among the subjective and objective quality scores, respectively.
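The three indices can be computed directly with SciPy; the PLCC/SROCC/RMSE definitions above are the standard ones:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def performance(subjective, objective):
    """PLCC, SROCC and RMSE between subjective scores and predicted scores."""
    subjective = np.asarray(subjective, dtype=float)
    objective = np.asarray(objective, dtype=float)
    plcc, _ = pearsonr(subjective, objective)
    srocc, _ = spearmanr(subjective, objective)
    rmse = float(np.sqrt(np.mean((subjective - objective) ** 2)))
    return plcc, srocc, rmse
```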
The final experimental results are listed in the tables: Table 1 gives the overall performance of the proposed method on LIVE 3D database stage I and stage II, and Table 2 lists the PLCC, SROCC, and RMSE for the different distortion types. The results show that the proposed algorithm achieves good prediction on both LIVE 3D database stage I and stage II, with good consistency with subjective evaluation.
Table 1: Overall performance of the proposed method on the LIVE 3D database
Table 2: PLCC, SROCC, and RMSE of the proposed method for each distortion type on the LIVE 3D database

Claims (4)

1. A full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement, characterized by comprising the following steps:
Step (1): Input a reference stereo image pair and a distorted stereo image pair, each consisting of a left view and a right view;
Step (2): Convert the color space of the stereo pairs in step (1) from the RGB color space to the YIQ color space, where the Y channel carries the gray component of the image and the I and Q channels carry its chrominance components; the specific conversion formula is as follows:
Step (3): Construct a Log-Gabor filter model and convolve it with the grayscale stereo pairs obtained from the Y channel in step (2), obtaining the energy response maps of the left and right views of the reference and distorted stereo pairs;
The expression of the Log-Gabor filter h_LG is as follows:
where f_0 and θ_0 denote the center frequency and orientation of the Log-Gabor filter, σ_f and σ_θ denote its radial and angular bandwidths, and f and θ denote the radial coordinate and orientation angle of the filter;
Convolving the Log-Gabor filter with the left and right views of the reference and distorted stereo pairs gives the corresponding energy response map F(x, y), whose expression is as follows:
where I(x, y) is the left or right view of the gray component of the reference or distorted stereo pair, and ⊗ denotes convolution;
Step (4): Extract disparity maps D_ref(x, y) and D_dis(x, y) from the grayscale reference and distorted stereo pairs obtained in step (2), and apply the spectral-residual saliency model to the left and right views of the grayscale reference pair to extract the left and right saliency maps SL_sr(x, y) and SR_sr(x, y);
Step (5): Construct the 3D visual saliency map S_3D(x, y) as follows:
S_3D(x, y) = ω_1·SL_sr(x, y) + ω_2·SR_sr(x, y) + ω_3·CB(x, y) + ω_4·D_ref(x, y) (5-2)
where ω_1 through ω_4 are different weighting factors and CB(x, y) denotes the center-bias mechanism;
Step (6): Shift the pixels of the right view of the grayscale stereo pair obtained in step (2) horizontally according to the parallax values of the disparity map obtained in step (4), constructing a calibrated right view I_R((x+d), y) whose pixel coordinates correspond to those of the left view; then use the Log-Gabor filter model of step (3) to obtain the energy response maps of the left view and the calibrated right view, and compute the normalized left-view weight map W_L(x, y) and calibrated right-view weight map W_R((x+d), y), expressed as follows:
where F_L(x, y) and F_R((x+d), y) are the energy response maps of the left view and the calibrated right view obtained in step (3), and d is the parallax value at the corresponding coordinate of the disparity map D_ref(x, y) computed in step (4);
Step (7): Using the left views of the grayscale reference and distorted pairs from step (2), the calibrated right views from step (6), and the normalized left-view and calibrated right-view weight maps, apply the binocular view fusion model to fuse each stereo pair, obtaining the intermediate reference and distorted images; the fusion formula is as follows:
CI(x, y) = W_L(x, y)·I_L(x, y) + W_R((x+d), y)·I_R((x+d), y) (7-1)
where CI(x, y) is the fused intermediate image, and I_L(x, y) and I_R((x+d), y) are the left view and the calibrated right view of the grayscale stereo pair;
Step (8): Extract edge and texture features from the intermediate grayscale reference and distorted images obtained in step (7);
Extraction of edge features: convolve the image under test with the Sobel operator to obtain a gradient map containing edge contour information; the expression for extracting the edge features of the intermediate reference and distorted images with the Sobel operator is as follows:
where f(x, y) is the left/right view of the stereo pair, ⊗ denotes convolution, and G_x and G_y are the 3 × 3 Sobel horizontal and vertical templates, used to detect the horizontal and vertical edges of the image, respectively; the templates are as follows:
Extraction of texture features: the local binary pattern LBP is used, whose expression is as follows:
where g_c is the gray value of the central pixel of the image, g_p is the gray value of a neighboring pixel, numbered 0, 1, 2, …, P counterclockwise starting from the position directly to the right of the central pixel, x and y are the coordinates of the central pixel, P is the number of neighboring pixels, and sgn(x) is the step function;
Step (9): Multiply, pixel by pixel, the visual features of the intermediate reference and distorted images extracted in step (8) by the visual saliency map built in step (5), obtaining the saliency-enhanced visual features as follows:
GMS_R(x, y) = GM_R(x, y)·S_3D(x, y), GMS_D(x, y) = GM_D(x, y)·S_3D(x, y) (9-1)
TIS_R(x, y) = TI_R(x, y)·S_3D(x, y), TIS_D(x, y) = TI_D(x, y)·S_3D(x, y) (9-2)
where GM_R(x, y) and TI_R(x, y) are the edge and texture features of the intermediate reference image, GM_D(x, y) and TI_D(x, y) are the edge and texture features of the intermediate distorted image, and S_3D is the integrated visual saliency map obtained in step (5);
Step (10): Measure the similarity of the saliency-enhanced visual features extracted in step (9), as follows:
where GMS_R(x, y) and TIS_R(x, y) are the saliency-enhanced edge and texture features of the intermediate reference image, GMS_D(x, y) and TIS_D(x, y) are those of the intermediate distorted image, the overlined quantities denote the means of the saliency-enhanced edge and texture features of the intermediate reference/distorted images, M and N are the height and width of the image in pixels, and Index_1 and Index_2 are the similarity metrics of the edge and texture features, respectively;
Step (11): Extract color features from the color stereo pairs obtained from the I and Q channels in step (2), and measure the similarity of the left and right views of the color stereo images separately, obtaining the color similarity maps of the respective channels as follows:
where IL_R(x, y) and QL_R(x, y) are the I- and Q-channel color maps of the reference left view, IL_D(x, y) and QL_D(x, y) are the I- and Q-channel color maps of the distorted left view, and SI_L(x, y) and SQ_L(x, y) are the I- and Q-channel color similarity maps of the left views; the right-view color similarity maps are obtained in the same way as those of the left views; T_1 and T_2 are constants that prevent the denominator from being zero;
Step (12): Fuse the left- and right-view color similarity maps obtained in step (11) with the binocular fusion method of steps (6) and (7), obtaining the intermediate I- and Q-channel color similarity maps SI(x, y) and SQ(x, y);
Step (13): Multiply, pixel by pixel, the I- and Q-channel color similarity maps of the intermediate reference and distorted images obtained in step (12) by the stereo visual saliency map built in step (5), obtaining saliency-enhanced color information maps and, from them, the I- and Q-channel color similarity metrics Index_3 and Index_4, expressed as follows:
where M and N are the height and width of the image in pixels;
Step (14): Extract depth features from the disparity maps of the reference and distorted stereo pairs obtained in step (4), and measure the distortion level of the distorted pair's disparity map; the similarity of the depth features of the reference and distorted pairs is extracted with a pixel-domain-error method and serves as the metric of the disparity-map quality degradation of the distorted stereo pair, expressed as follows:
where D_ref(x, y) is the disparity map of the reference image, D_dis(x, y) is the disparity map of the distorted image, mean(·) is the mean function, and Index_5 is the similarity metric of the depth features;
Step (15): Combine the five vision-related metrics Index_1 through Index_5 obtained in steps (10), (13), and (14), and perform training and prediction with support vector regression (SVR) to obtain the optimal prediction model, which is mapped to the objective evaluation score of image quality.
2. The full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement according to claim 1, characterized in that the disparity map extraction of step (4) comprises the following steps:
Step (4.1): Shift all pixels of the right views of the reference and distorted stereo pairs horizontally to the right n times with a step of s pixels per move, obtaining k corrected right views I_R((x + i·s), y) after the horizontal shifts, where i = 1, 2, …, k and k = n/s; each corrected right view carries the label i, i = 1, 2, …, k;
Step (4.2): Compute the structural similarity between the left view of the stereo pair and each of the k corrected right views with the structural similarity algorithm SSIM, obtaining k structural similarity maps; the SSIM expression is as follows:
SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ (4-1)
where μ_x and μ_y are the means within a corresponding image block of the left view and a corrected right view of the stereo pair; σ_x and σ_y are the variances within that block; σ_xy is the covariance between the block of the left view and the block of the corrected right view; C_1 = (k_1·L)², C_2 = (k_2·L)², C_3 = C_2/2, where k_1 and k_2 are positive numbers much smaller than 1 and L is the maximum gray level of the image, preventing a zero denominator; α, β, γ are the weighting exponents of the three terms and are all greater than zero; l(x, y), c(x, y), and s(x, y) are the luminance, contrast, and structural similarity functions of the stereo image within an image block;
Step (4.3): For each pixel p(x, y) of the left view, take the structural similarity map with the maximum local structural similarity among the k maps; its label i, i = 1, 2, …, k, is the parallax value of pixel p(x, y), recorded as d(x, y); a parallax value can thus be constructed for every pixel, forming the disparity map D.
3. The full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement according to claim 1, characterized in that the extraction of the visual saliency maps of step (4) is specifically:
The visual saliency maps are extracted with the spectral residual (SR) visual saliency model, with the following particulars:
Given an image I(x, y):
where F(·) and F⁻¹(·) are the two-dimensional Fourier transform and its inverse, Re(·) takes the real part, Angle(·) takes the argument, A(f) is the amplitude spectrum, P(f) is the phase spectrum, and S(x, y) is the saliency map obtained by the spectral residual method; in addition, g(x, y) is a Gaussian low-pass filter and h_n(f) is a local mean filter, with expressions as follows:
where σ is the standard deviation of the distribution;
Applying the spectral residual method to the left and right views of the reference image pair yields the left and right visual saliency maps of the reference pair.
4. the complete of view-based access control model specific image feature enhancing according to claim 1 is objectively evaluated with reference to stereo image quality Method, it is characterised in that the step (15) is trained regression forecasting using the method for support vector regression (SVR), obtains Optimum prediction model specifically:
The method that SVR training prediction technique specifically uses 5- folding cross validation is trained and tests to model, and concrete scheme is such as Under:
Sample is divided into mutually disjoint five parts by step (15.1) at random, selects wherein four parts of progress SVR training to obtain most Then remaining portion is applied on the model and is tested by good model, obtain corresponding objective quality value and come to subjective matter Amount is predicted that then five parts of samples can carry out five training predictions;
Step (15.2): the operation of step (15.1) is repeated 1000 times, and the median of all experimental results is taken to characterize the performance of the proposed model;
The expression is as follows:

Q = SVR(Index_1, Index_2, …, Index_n)   (15-1)

where Q is the objective quality evaluation score.
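For illustration only, a minimal scikit-learn sketch of the 5-fold cross-validated SVR scheme of steps (15.1)–(15.2). The RBF kernel, its parameters C and gamma, and the use of Spearman correlation as the per-round performance measure are illustrative assumptions; the claims fix neither the kernel nor the performance criterion:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import KFold
from sklearn.svm import SVR

def median_cv_performance(features, mos, n_repeats=1000, seed=0):
    """Repeat 5-fold SVR training/testing and return the median result.

    features: (n_samples, n_features) matrix of quality indices
              (Index_1, ..., Index_n); mos: subjective quality scores.
    """
    rng = np.random.RandomState(seed)
    results = []
    for _ in range(n_repeats):
        # Step (15.1): randomly split the samples into five disjoint parts.
        kf = KFold(n_splits=5, shuffle=True, random_state=rng.randint(10**6))
        for train_idx, test_idx in kf.split(features):
            model = SVR(kernel="rbf", C=100.0, gamma="scale")
            model.fit(features[train_idx], mos[train_idx])
            q = model.predict(features[test_idx])  # objective scores Q
            results.append(spearmanr(q, mos[test_idx]).correlation)
    # Step (15.2): the median over all runs characterizes the model.
    return np.median(results)
```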
CN201811109216.6A 2018-09-21 2018-09-21 Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement Active CN109523506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811109216.6A CN109523506B (en) 2018-09-21 2018-09-21 Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement

Publications (2)

Publication Number Publication Date
CN109523506A 2019-03-26
CN109523506B CN109523506B (en) 2021-03-26

Family

ID=65771646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811109216.6A Active CN109523506B (en) 2018-09-21 2018-09-21 Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement

Country Status (1)

Country Link
CN (1) CN109523506B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130155192A1 (en) * 2011-12-15 2013-06-20 Industrial Technology Research Institute Stereoscopic image shooting and display quality evaluation system and method applicable thereto
WO2013105720A1 (en) * 2012-01-10 2013-07-18 에스케이플래닛 주식회사 Device and method for analyzing quality of three-dimensional stereoscopic image
CN107330873A (en) * 2017-05-05 2017-11-07 浙江大学 Objective evaluation method for quality of stereo images based on multiple dimensioned binocular fusion and local shape factor
CN107610093A (en) * 2017-08-02 2018-01-19 西安理工大学 Full-reference image quality evaluating method based on similarity feature fusion
CN107578404A (en) * 2017-08-22 2018-01-12 浙江大学 The complete of view-based access control model notable feature extraction refers to objective evaluation method for quality of stereo images

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11810530B2 2019-04-02 2023-11-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for display-brightness adjustment and related devices
CN110399881A * 2019-07-11 2019-11-01 Shenzhen University End-to-end quality enhancement method and device based on binocular stereo images
CN110399887B * 2019-07-19 2022-11-04 Hefei University of Technology Representative color extraction method based on visual saliency and histogram statistical technology
CN110399887A * 2019-07-19 2019-11-01 Hefei University of Technology Representative color extraction method based on visual saliency and histogram statistical technology
CN112132774A * 2019-07-29 2020-12-25 Fang Yuming Quality evaluation method for tone-mapped images
CN110930398A * 2019-12-09 2020-03-27 Jiaxing University Full-reference video quality evaluation method based on Log-Gabor similarity
CN110930398B * 2019-12-09 2023-05-09 Jiaxing University Full-reference video quality evaluation method based on Log-Gabor similarity
CN111179238A * 2019-12-24 2020-05-19 Donghua University Subset confidence ratio dynamic selection method for subset-oriented guidance consistency enhancement evaluation
CN111179238B * 2019-12-24 2022-12-20 Donghua University Subset confidence ratio dynamic selection method for underwater image set-oriented guidance consistency enhancement evaluation
CN112581461A * 2020-12-24 2021-03-30 Shenzhen University No-reference image quality evaluation method and device based on a generative network
CN112734733A * 2021-01-12 2021-04-30 Tianjin University No-reference image quality monitoring method based on channel recombination and feature fusion
CN112734733B * 2021-01-12 2022-11-01 Tianjin University No-reference image quality monitoring method based on channel recombination and feature fusion
CN114067006B * 2022-01-17 2022-04-08 Hunan University of Technology and Business Screen content image quality evaluation method based on discrete cosine transform
CN114067006A * 2022-01-17 2022-02-18 Hunan University of Technology and Business Screen content image quality evaluation method based on discrete cosine transform

Also Published As

Publication number Publication date
CN109523506B (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN109523506A (en) The complete of view-based access control model specific image feature enhancing refers to objective evaluation method for quality of stereo images
CN107578404B (en) View-based access control model notable feature is extracted complete with reference to objective evaluation method for quality of stereo images
Battisti et al. Objective image quality assessment of 3D synthesized views
CN109255358B (en) 3D image quality evaluation method based on visual saliency and depth map
Shen et al. Hybrid no-reference natural image quality assessment of noisy, blurry, JPEG2000, and JPEG images
CN100389437C (en) Method and apparatus for modeling film grain patterns in the frequency domain
CN111563418A (en) Asymmetric multi-mode fusion significance detection method based on attention mechanism
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN106303507B (en) Video quality evaluation without reference method based on space-time united information
CN109919959A (en) Tone mapping image quality evaluating method based on color, naturality and structure
CN104123705B (en) A kind of super-resolution rebuilding picture quality Contourlet territory evaluation methodology
CN102333233A (en) Stereo image quality objective evaluation method based on visual perception
CN109831664B (en) Rapid compressed stereo video quality evaluation method based on deep learning
CN106023230B (en) A kind of dense matching method of suitable deformation pattern
CN105550989B (en) The image super-resolution method returned based on non local Gaussian process
CN102547368A (en) Objective evaluation method for quality of stereo images
CN103354617B (en) Boundary strength compressing image quality objective evaluation method based on DCT domain
CN109345502A (en) A kind of stereo image quality evaluation method based on disparity map stereochemical structure information extraction
Hsia Improved depth image-based rendering using an adaptive compensation method on an autostereoscopic 3-D display for a Kinect sensor
Zhou et al. Reduced-reference quality assessment of point clouds via content-oriented saliency projection
CN108447059A (en) It is a kind of to refer to light field image quality evaluating method entirely
CN116934592A (en) Image stitching method, system, equipment and medium based on deep learning
CN104637060A (en) Image partition method based on neighbor-hood PCA (Principal Component Analysis)-Laplace
CN117671384A (en) Hyperspectral image classification method
CN105721863B (en) Method for evaluating video quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant