CN107578404B - Full-reference objective stereo image quality assessment method based on visual saliency feature extraction - Google Patents
Full-reference objective stereo image quality assessment method based on visual saliency feature extraction
- Publication number: CN107578404B
- Application number: CN201710721546.XA
- Authority
- CN
- China
- Prior art keywords
- image
- view
- stereo
- distorted
- follows
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Processing (AREA)
- Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
Abstract
The invention discloses a full-reference objective stereo image quality assessment method based on visual saliency feature extraction. The left and right views of a stereo image pair are processed to obtain the corresponding disparity map. Image fusion is applied to the left and right views of the reference and distorted stereo image pairs to obtain intermediate reference and distorted images; reference and distorted saliency maps are computed with a spectral residual visual saliency model and merged into an integrated visual saliency map. Visual information features are extracted from the intermediate reference and distorted images, depth information features are extracted from the disparity maps of the stereo image pairs, and similarity is measured, yielding a saliency-enhanced measurement index for each visual information feature. These indices are fed to support vector machine training and prediction to obtain an objective quality score, realizing the mapping to stereo image quality and completing its measurement and evaluation. The objective image quality evaluation proposed by the invention shows good consistency with subjective evaluation, and its performance exceeds that of existing stereo image quality assessment methods.
Description
Technical field
The invention belongs to the technical field of image processing, and more particularly relates to a full-reference objective stereo image quality assessment method based on visual saliency feature extraction.
Background art
Images are subject to various distortions during sampling, transmission, compression, and reconstruction, while human demands on the quality of image information keep rising; stereo image quality assessment therefore becomes increasingly important in daily life as 3D video and related technologies develop. Subjective stereo image quality evaluation requires human observers to assign a subjective quality score to every image. Such methods are time-consuming and poorly reproducible, so it is highly necessary to develop objective stereo image quality assessment methods that evaluate stereo image quality automatically, efficiently, and objectively. An ideal objective IQA metric should predict image quality well and agree closely with subjective measurements.
Objective stereo image quality assessment methods fall into three categories: full-reference, reduced-reference, and no-reference. The main distinction among these three types of methods is their degree of dependence on the original reference image. Current research at home and abroad concentrates on full-reference and no-reference stereo image quality metrics; full-reference image quality assessment has been studied the most, its techniques are comparatively mature, and the models built on it agree better with subjective measurements. However, since models of stereoscopic vision are still imperfect, objective stereo image quality assessment remains a hot and difficult research topic.
Summary of the invention
The invention discloses a full-reference objective stereo image quality assessment method based on visual saliency feature extraction. Its purpose is to use a visual saliency model to assist the extraction of visual features of stereo images, and to combine the visual feature information so as to realize the mapping to stereo image quality and complete its measurement and evaluation.
The technical solution adopted by the present invention is as follows:
First, the left and right views of the reference and distorted stereo image pairs are processed with a structural similarity algorithm to obtain the corresponding disparity maps, and a binocular view fusion algorithm fuses the left and right views of the reference and distorted stereo image pairs into intermediate reference and distorted images. Second, a spectral residual visual saliency model is applied to the intermediate reference and distorted images to obtain reference and distorted saliency maps, which are merged into an integrated visual saliency map by a maximization rule. Then, visual information features are extracted from the intermediate reference and distorted images and depth information features from the disparity maps, and similarity is measured under the guidance of the visual saliency map, yielding a measurement index for each visual information feature. Finally, the visual information feature indices are fed to support vector machine training and prediction, realizing the mapping to stereo image quality and completing its measurement and evaluation.
The technical solution adopted by the present invention to solve the technical problems is as follows:
Step (1): input a reference stereo image pair and a distorted stereo image pair, where each stereo image pair comprises a left-view image and a right-view image;
Step (2): construct a Log-Gabor filter model and convolve it with the stereo image pairs of step (1) to obtain the energy response maps of the left and right views of the reference and distorted stereo image pairs;
The expression of the Log-Gabor filter is as follows:
where f0 and θ0 are the center frequency and orientation angle of the Log-Gabor filter, σθ and σf respectively represent the angular bandwidth and radial bandwidth of the filter, and f and θ respectively represent the radial coordinate and orientation angle of the filter.
Convolving the Log-Gabor filters with the left and right views of the reference and distorted stereo image pairs yields the corresponding energy response maps, whose expression is as follows:

where I(x, y) is a left or right view of the reference and distorted stereo image pairs, and ⊗ denotes the convolution operation;
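Purely as an illustration of step (2) (the patent's filter equation is an image not reproduced in the text, and the embodiment works in Matlab), the energy response can be sketched in numpy with the widely used frequency-domain log-Gabor form; the function names and the scale set other than f0 = 1/6 are assumptions of this sketch:

```python
import numpy as np

def log_gabor(rows, cols, f0, theta0, sigma_f=0.75, sigma_theta=np.pi / 18):
    """Frequency-domain log-Gabor filter (standard form; assumed here)."""
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0  # placeholder so log() is defined at DC; DC gain zeroed below
    theta = np.arctan2(fy, fx)
    radial = np.exp(-np.log(f / f0) ** 2 / (2 * np.log(sigma_f) ** 2))
    radial[0, 0] = 0.0
    d = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))  # wrapped angle difference
    angular = np.exp(-d ** 2 / (2 * sigma_theta ** 2))
    return radial * angular

def energy_response(img, f0s=(1 / 6, 1 / 8, 1 / 12, 1 / 16),
                    thetas=tuple(k * np.pi / 5 for k in range(5))):
    """Sum local energy over orientations per scale, then take the max across scales."""
    F = np.fft.fft2(img)
    per_scale = []
    for f0 in f0s:
        e = np.zeros(img.shape)
        for t0 in thetas:
            resp = np.fft.ifft2(F * log_gabor(*img.shape, f0, t0))
            e += np.abs(resp) ** 2  # local energy of the complex filter response
        per_scale.append(e)
    return np.max(per_scale, axis=0)
```

Filtering is done in the frequency domain, which is equivalent to the spatial convolution the text describes.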
Step (3): extract the disparity maps from the reference stereo image pair and the distorted stereo image pair input in step (1), respectively;
Step (4): shift the pixels of the right view of each stereo image pair input in step (1) horizontally to the right according to the disparity values of the disparity map obtained in step (3), constructing a calibrated right view IR((x+d), y) whose pixel coordinates correspond to the left view; then obtain the energy response maps of the left view and the calibrated right view on the basis of step (2), and compute the normalized left-view weight map WL(x, y) and calibrated right-view weight map WR((x+d), y), whose expressions are as follows:

where FL(x, y) and FR((x+d), y) are respectively the energy response maps of the left view and calibrated right view obtained in step (2), and d is the disparity value at the corresponding coordinate of the disparity map D computed in step (3);
Step (5): based on the left views of the reference and distorted stereo image pairs of step (1), the calibrated right views of the reference and distorted stereo image pairs obtained in step (4), and the normalized left-view weight map and calibrated right-view weight map, apply the binocular view fusion model to fuse the stereo images, obtaining the intermediate reference and distorted images respectively; the binocular view fusion formula is as follows:

CI(x, y) = WL(x, y) × IL(x, y) + WR((x+d), y) × IR((x+d), y)   (5-1)

where CI(x, y) is the fused binocular intermediate image, and IL(x, y) and IR((x+d), y) are respectively the left view and calibrated right view of the stereo image pair;
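A minimal numpy sketch of the fusion of steps (4) and (5), assuming (since the weight formula is an image not shown in the text) the common energy-ratio normalization WL = FL / (FL + FR); the function name and the clamped border handling are this sketch's own choices:

```python
import numpy as np

def fuse_views(left, right, disparity, energy_left, energy_right, eps=1e-8):
    """Binocular fusion CI = WL*IL + WR*IR((x+d), y), as in formula (5-1)."""
    rows, cols = left.shape
    y, x = np.mgrid[0:rows, 0:cols]
    xs = np.clip(x + disparity.astype(int), 0, cols - 1)  # horizontal shift by d (clamped at border)
    right_cal = right[y, xs]                  # calibrated right view IR((x+d), y)
    f_l, f_r = energy_left, energy_right[y, xs]
    w_l = f_l / (f_l + f_r + eps)             # assumed energy-ratio normalization
    w_r = 1.0 - w_l                           # so the weights sum to one
    return w_l * left + w_r * right_cal
```

With equal energies the fusion reduces to averaging the two aligned views.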
Step (6): extract the reference and distorted visual saliency maps respectively from the intermediate reference and distorted images obtained in step (5), and merge them to establish the integrated visual saliency map Sf.
Step (7): extract depth feature information using the disparity maps of the reference stereo image pair and the distorted stereo image pair obtained in step (3), and measure the degree of distortion of the disparity map of the distorted stereo image pair; the method of pixel-domain error extracts the similarity of the depth feature information of the reference and distorted stereo image pairs, which serves on the disparity maps as an index reflecting the degree of quality distortion of the distorted stereo image pair; the expression is as follows:

where Dref represents the disparity map of the reference image, Ddis represents the disparity map of the distorted image, E(·) is the mean function, ε is a constant greater than zero that prevents the denominator from being zero, and Index1 and Index2 are the two similarity measurement indices of the depth feature information;
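The exact pixel-domain error formulas for Index1 and Index2 are images and are not reproduced in the text; the numpy sketch below uses two hypothetical forms chosen only to be consistent with the description (E(·) as the mean, ε guarding the denominator):

```python
import numpy as np

def depth_indices(d_ref, d_dis, eps=0.001):
    """Two hypothetical pixel-domain depth similarity indices (illustrative forms only)."""
    # Index1: error-based; identical disparity maps give exactly 1
    index1 = 1.0 / (1.0 + np.mean(np.abs(d_ref - d_dis)))
    # Index2: mean-ratio similarity, with the small constant eps guarding the denominator
    m_ref, m_dis = np.mean(d_ref), np.mean(d_dis)
    index2 = (2 * m_ref * m_dis + eps) / (m_ref ** 2 + m_dis ** 2 + eps)
    return index1, index2
```

Both indices approach 1 as the distorted disparity map approaches the reference one.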
Step (8): extract edge and texture features respectively from the intermediate reference and distorted images obtained in step (5);
The image under test is convolved with the Prewitt operator to obtain a gradient map containing edge contour information; the expression of the edge information feature of the intermediate reference and distorted images extracted with the Prewitt operator is as follows:

where f(x, y) is a left/right view of the stereo image pair, ⊗ denotes the convolution operation, and hx and hy are the 3 × 3 Prewitt vertical and horizontal templates, used respectively to detect the horizontal and vertical edges of the image. The template expressions are as follows:

The texture information feature is extracted with the local binary pattern LBP; the expression of LBP is as follows:

where gc is the gray value of the central pixel of the image, gp is the gray value of a neighboring pixel of the image, x and y represent the coordinates of the central pixel, and sgn(x) is the step function.
Step (9): multiply, pixel by pixel, the visual information features of the intermediate reference and distorted images extracted in step (8) with the visual saliency map established in step (6) to obtain the saliency-enhanced visual information features; the expressions are as follows:

GMSR(x, y) = GMR(x, y) * Sf(x, y)   GMSD(x, y) = GMD(x, y) * Sf(x, y)   (9-1)
TISR(x, y) = TIR(x, y) * Sf(x, y)   TISD(x, y) = TID(x, y) * Sf(x, y)   (9-2)
ISR(x, y) = IR(x, y) * Sf(x, y)   ISD(x, y) = ID(x, y) * Sf(x, y)   (9-3)

where GMR, TIR, and IR are respectively the edge, texture, and luminance information of the intermediate reference image, GMD, TID, and ID are respectively the edge, texture, and luminance information of the intermediate distorted image, and Sf is the integrated visual saliency map obtained in step (6);
Step (10): perform similarity measurement on the saliency-enhanced visual information features extracted in step (9); the expressions are as follows:

where GMSR, TISR, and ISR denote the saliency-enhanced edge, texture, and luminance information features of the intermediate reference image, GMSD, TISD, and ISD denote the saliency-enhanced edge, texture, and luminance information features of the intermediate distorted image, Index3, Index4, and Index5 respectively represent the similarity measurement indices of the edge, texture, and luminance information features, and C4 is a constant greater than zero whose purpose is to prevent the denominator from being zero;
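The index formulas of step (10) are images; as a hedged illustration, the sketch below applies an SSIM-style ratio to saliency-weighted feature maps (the ratio form and mean pooling are assumptions; C4 = 0.5 follows the embodiment):

```python
import numpy as np

def enhanced_similarity(feat_ref, feat_dis, saliency, c4=0.5):
    """Hypothetical SSIM-style similarity index over saliency-weighted feature maps."""
    fr = feat_ref * saliency   # saliency enhancement, as in step (9)
    fd = feat_dis * saliency
    sim = (2 * fr * fd + c4) / (fr ** 2 + fd ** 2 + c4)  # pointwise similarity in (0, 1]
    return float(np.mean(sim))  # pooled to a single index
```

Identical feature maps yield an index of 1; any discrepancy lowers it.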
Step (11): down-sample the intermediate reference and distorted images obtained in step (5) to obtain intermediate reference and distorted images at L scales; to the intermediate reference and distorted images in the L scale spaces, likewise apply the methods of steps (6), (9), and (10) to establish the visual saliency maps, extract the visual features, and measure similarity, obtaining N similarity measurement indices in total, where N = 2L + 2;
The down-sampling method is as follows: an input image is passed through a low-pass filter to obtain a filtered image, and the filtered image is then down-sampled with decimation factor m to obtain the down-sampled image.
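The low-pass filter of step (11) is not specified in the text; the sketch below uses a simple m × m box average as an assumed stand-in before decimating by the factor m:

```python
import numpy as np

def downsample(img, m=2):
    """Assumed low-pass (m x m box average), then keep one sample per m x m block."""
    rows, cols = img.shape
    pad = np.pad(img, ((0, (-rows) % m), (0, (-cols) % m)), mode='edge')
    return pad.reshape(pad.shape[0] // m, m, pad.shape[1] // m, m).mean(axis=(1, 3))

def pyramid(img, levels=3, m=2):
    """Intermediate images at `levels` scales; scale 1 is the original image."""
    imgs = [img]
    for _ in range(levels - 1):
        imgs.append(downsample(imgs[-1], m))
    return imgs
```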
Step (12): combine the measurement indices obtained in steps (8) and (11), feed them to support vector machine (SVR) training and prediction to obtain the optimal prediction model, and map it to the objective evaluation score of the image quality.
According to the patented full-reference objective stereo image quality assessment method based on visual saliency feature extraction, the step (3) comprises the following steps:
Step (3.1): shift all pixels of the right views of the reference and distorted stereo image pairs horizontally to the right k times, respectively, with a step of s pixels per shift, obtaining k corrected right views IR((x+i*s), y), (i = 1, 2, … k) after the horizontal shifts, each corrected right view carrying the corresponding label i, (i = 1, 2, … k);
Step (3.2): use the structural similarity algorithm SSIM to compute the structural similarity between the left view of the stereo image pair and each of the k corrected right views, obtaining k structural similarity maps. The SSIM expression is as follows:

where μx and μy respectively denote the means within corresponding image blocks of the left view and a corrected right-view image of the stereo image pair; σx and σy respectively denote the variance values within a corresponding image block of the left view and corrected right-view image; σxy denotes the covariance between an image block of the left view and one of the corrected right-view image; C1 and C2 are constants greater than zero that prevent the denominator from being zero;
Step (3.3): for each pixel (x, y) of the left view, take, among its k structural similarity maps, the one in which the local structural similarity is largest, with corresponding label i, (i = 1, 2, … k); then i*s is the disparity value corresponding to pixel (x, y), recorded as d, thereby constructing the disparity map D.
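Steps (3.1)-(3.3) can be sketched in numpy as below, using the standard SSIM index computed from local means, variances, and covariance over a sliding window (the window size, the wrap-around shift via np.roll, and the helper names are this sketch's assumptions):

```python
import numpy as np

def box_mean(img, win=7):
    """Local mean over a win x win window via a summed-area table (zero padding)."""
    p = np.pad(img, win // 2)
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(p, axis=0), axis=1)
    r, q = img.shape
    return (s[win:win + r, win:win + q] - s[:r, win:win + q]
            - s[win:win + r, :q] + s[:r, :q]) / (win * win)

def ssim_map(x, y, win=7, c1=6.5025, c2=58.5225):
    """Standard SSIM map from local means, variances, and covariance."""
    mx, my = box_mean(x, win), box_mean(y, win)
    vx = box_mean(x * x, win) - mx ** 2
    vy = box_mean(y * y, win) - my ** 2
    vxy = box_mean(x * y, win) - mx * my
    return ((2 * mx * my + c1) * (2 * vxy + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def disparity_map(left, right, k=25, s=1, win=7):
    """Per pixel, pick the right shift i*s whose local SSIM with the left view is largest."""
    maps = []
    for i in range(1, k + 1):
        corrected = np.roll(right, -i * s, axis=1)  # IR((x+i*s), y); wrap-around is an assumption
        maps.append(ssim_map(left, corrected, win))
    best = np.argmax(np.stack(maps), axis=0)        # label i of the best shift
    return (best + 1) * s                           # disparity d = i*s
```

For a right view that is an exact horizontal translation of the left view, the recovered disparity equals the true shift almost everywhere.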
According to the patented full-reference objective stereo image quality assessment method based on visual saliency feature extraction, the step (6) is specifically:
The visual saliency map extraction method uses the spectral residual (SR) visual saliency model; the specific content is as follows:
Given an image I(x, y), we have:
where F(·) and F⁻¹(·) are the two-dimensional Fourier transform and its inverse transform, Re(·) denotes taking the real part, Angle(·) denotes taking the phase angle, S(x, y) is the saliency map obtained by the spectral residual method, g(x, y) is a Gaussian low-pass filter, and hn(f) is a local mean filter whose expression is as follows:

where σ is the standard deviation of the probability distribution;
The spectral residual method is applied to the intermediate reference and distorted images to obtain the reference and distorted visual saliency maps, and the integrated visual saliency map is established by the method shown in the following formula:

Sf(x, y) = Max[Sref(x, y), Sdis(x, y)]   (6-4)

where Sref and Sdis are respectively the visual saliency maps of the intermediate reference and distorted images, and Sf is the integrated visual saliency map.
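A numpy sketch of the spectral residual saliency of step (6) and the max-integration of formula (6-4); the filter sizes (a 3 × 3 local mean for hn and σ = 1.5 for the Gaussian, as in the embodiment) are kept, while the separable filtering helper is this sketch's own:

```python
import numpy as np

def _blur(img, kernel1d):
    """Separable 'same' filtering along rows, then columns."""
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel1d, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel1d, mode='same'), 0, out)

def spectral_residual_saliency(img, n=3, sigma=1.5):
    """Spectral residual: log-amplitude minus its local mean, recombined with the phase."""
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    box = np.ones(n) / n                       # hn as a separable n x n local mean filter
    residual = log_amp - _blur(log_amp, box)   # the spectral residual
    s = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    t = np.arange(-3, 4)
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    return _blur(s, g / g.sum())               # Gaussian low-pass g(x, y)

def integrated_saliency(s_ref, s_dis):
    """Sf = Max[Sref, Sdis], per formula (6-4)."""
    return np.maximum(s_ref, s_dis)
```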
According to the patented full-reference objective stereo image quality assessment method based on visual saliency feature extraction, the support vector machine SVR training and prediction of step (12) obtains the optimal prediction model specifically as follows:
The SVR training and prediction specifically uses 5-fold cross validation to train and test the model; the concrete scheme is as follows:
Step (12.1): randomly divide the samples into five mutually disjoint parts; select four of them for SVR training to obtain the best model, then apply the model to the remaining part to obtain the corresponding objective quality values and thereby predict the subjective quality;
Step (12.2): repeat the operation of step (12.1) many times, and take the average of all the data results to characterize the performance of the proposed model;
The expression is as follows:

Q = SVR(Index1, Index2, …, Indexn)   (12-1)

where Q is the objective quality evaluation score.
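The 5-fold scheme of steps (12.1)-(12.2) can be sketched as below; since an SVR implementation is outside the text, a linear least-squares regressor stands in for the SVR purely for illustration (the fold split and hold-out prediction follow the description):

```python
import numpy as np

def five_fold_cv(X, y, fit, predict, seed=0):
    """Split samples into five disjoint folds; train on four, predict the held-out one."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), 5)  # five mutually disjoint parts
    preds = np.empty(len(y))
    for i in range(5):
        train = np.concatenate([folds[j] for j in range(5) if j != i])
        model = fit(X[train], y[train])        # an SVR in the patent; any regressor here
        preds[folds[i]] = predict(model, X[folds[i]])
    return preds

def fit_ls(X, y):
    """Linear least-squares stand-in for the SVR (illustration only)."""
    A = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_ls(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w
```

On exactly linear data the held-out predictions recover the targets, confirming the fold bookkeeping.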
Beneficial effects of the present invention:
The present invention uses the visual saliency map to assist the extraction of visual information features, and combines the visual information features to realize the mapping to stereo image quality, thereby objectively evaluating the quality of distorted stereo image pairs. Experimental results show that the evaluation performance of the proposed method on stereo image quality is highly consistent with subjective evaluation, outperforming conventional stereo image quality assessment methods.
Brief description of the drawings
Fig. 1 is the schematic diagram of the full-reference objective stereo image quality assessment method based on visual saliency feature extraction of the present invention.
Specific embodiment
The method of the present invention is further described below with reference to the accompanying drawing.
Step (1): using Matlab software, successively read in the reference stereo image pairs and the corresponding distorted stereo image pairs of Phase I and Phase II of the 3D LIVE image database of the University of Texas at Austin, where each stereo image pair comprises left-view and right-view images respectively.
Step (2): construct a Log-Gabor filter model and convolve it with the stereo image pairs of step (1) to obtain the energy response maps of the left and right views of the reference and distorted stereo image pairs;
The expression of the Log-Gabor filter is as follows:
where f0 and θ0 are the center frequency and orientation angle of the Log-Gabor filter, σθ and σf respectively represent the angular bandwidth and radial bandwidth of the filter, and f and θ respectively represent the radial coordinate and orientation angle of the filter. Here, σθ = π/18, σf = 0.75, f0 = 1/6, θ0 = 0, f = 0, π/4, π/3, 3π/4, and θ = 0, π/5, 2π/5, 3π/5, 4π/5. This yields 4 × 5 = 20 Log-Gabor filter energy response maps; the local energy of the Log-Gabor filter response is defined as the maximum of the energies across the scales, and the local energy at each scale is defined as the sum of the local energies corresponding to the orientations.
Convolving the Log-Gabor filters with the left and right views of the reference and distorted stereo image pairs yields the corresponding energy response maps, whose expression is as follows:

where I(x, y) is a left or right view of the reference and distorted stereo image pairs, and ⊗ denotes the convolution operation;
Step (3): extract the disparity maps from the reference stereo image pair and the distorted stereo image pair input in step (1), respectively;
Step (4): shift the pixels of the right view of each stereo image pair input in step (1) horizontally to the right according to the disparity values of the disparity map obtained in step (3), constructing a calibrated right view IR((x+d), y) whose pixel coordinates correspond to the left view; then obtain the energy response maps of the left view and the calibrated right view on the basis of step (2), and compute the normalized left-view weight map WL(x, y) and calibrated right-view weight map WR((x+d), y), whose expressions are as follows:

where FL(x, y) and FR((x+d), y) are respectively the energy response maps of the left view and calibrated right view obtained in step (2), and d is the disparity value at the corresponding coordinate of the disparity map D computed in step (3);
Step (5): based on the left views of the reference and distorted stereo image pairs of step (1), the calibrated right views of the reference and distorted stereo image pairs obtained in step (4), and the normalized left-view weight map and calibrated right-view weight map, apply the binocular view fusion model to fuse the stereo images, obtaining the intermediate reference and distorted images respectively; the binocular view fusion formula is as follows:

CI(x, y) = WL(x, y) × IL(x, y) + WR((x+d), y) × IR((x+d), y)   (5-1)

where CI(x, y) is the fused binocular intermediate image, and IL(x, y) and IR((x+d), y) are respectively the left view and calibrated right view of the stereo image pair;
Step (6): extract the reference and distorted visual saliency maps respectively from the intermediate reference and distorted images obtained in step (5), and merge them to establish the integrated visual saliency map Sf.
Step (7): extract depth feature information using the disparity maps of the reference stereo image pair and the distorted stereo image pair obtained in step (3), and measure the degree of distortion of the disparity map of the distorted stereo image pair; the method of pixel-domain error extracts the similarity of the depth feature information of the reference and distorted stereo image pairs, which serves on the disparity maps as an index reflecting the degree of quality distortion of the distorted stereo image pair; the expression is as follows:

where Dref represents the disparity map of the reference image, Ddis represents the disparity map of the distorted image, E(·) is the mean function, ε is a constant greater than zero that prevents the denominator from being zero, ε = 0.001, and Index1 and Index2 are the two similarity measurement indices of the depth feature information;
Step (8): extract edge and texture features respectively from the intermediate reference and distorted images obtained in step (5);
The image under test is convolved with the Prewitt operator to obtain a gradient map containing edge contour information; the expression of the edge information feature of the intermediate reference and distorted images extracted with the Prewitt operator is as follows:

where f(x, y) is a left/right view of the stereo image pair, ⊗ denotes the convolution operation, and hx and hy are the 3 × 3 Prewitt vertical and horizontal templates, used respectively to detect the horizontal and vertical edges of the image. The template expressions are as follows:

The texture information feature is extracted with the local binary pattern LBP; the expression of LBP is as follows:

where gc is the gray value of the central pixel of the image, gp is the gray value of a neighboring pixel of the image, x and y represent the coordinates of the central pixel, and sgn(x) is the step function.
Step (9): multiply, pixel by pixel, the visual information features of the intermediate reference and distorted images extracted in step (8) with the visual saliency map established in step (6) to obtain the saliency-enhanced visual information features; the expressions are as follows:

GMSR(x, y) = GMR(x, y) * Sf(x, y)   GMSD(x, y) = GMD(x, y) * Sf(x, y)   (9-1)
TISR(x, y) = TIR(x, y) * Sf(x, y)   TISD(x, y) = TID(x, y) * Sf(x, y)   (9-2)
ISR(x, y) = IR(x, y) * Sf(x, y)   ISD(x, y) = ID(x, y) * Sf(x, y)   (9-3)

where GMR, TIR, and IR are respectively the edge, texture, and luminance information of the intermediate reference image, GMD, TID, and ID are respectively the edge, texture, and luminance information of the intermediate distorted image, and Sf is the integrated visual saliency map obtained in step (6);
Step (10): perform similarity measurement on the saliency-enhanced visual information features extracted in step (9); the expressions are as follows:

where GMSR, TISR, and ISR denote the saliency-enhanced edge, texture, and luminance information features of the intermediate reference image, GMSD, TISD, and ISD denote the saliency-enhanced edge, texture, and luminance information features of the intermediate distorted image, Index3, Index4, and Index5 respectively represent the similarity measurement indices of the edge, texture, and luminance information features, and C4 is a constant greater than zero whose purpose is to prevent the denominator from being zero, C4 = 0.5.
Step (11): down-sample the intermediate reference and distorted images obtained in step (5) to obtain intermediate reference and distorted images at L scales, L = 3. To the intermediate reference and distorted images in the L scale spaces, likewise apply the methods of steps (6), (9), and (10) to establish the visual saliency maps, extract the visual features, and measure similarity, obtaining N similarity measurement indices in total, where N = 2L + 2; thus N = 8.
The down-sampling method is as follows: an input image is passed through a low-pass filter to obtain a filtered image, and the filtered image is then down-sampled with decimation factor m, m = 2, to obtain the down-sampled image.
Step (12): combine the measurement indices obtained in steps (8) and (11), feed them to support vector machine (SVR) training and prediction to obtain the optimal prediction model, and map it to the objective evaluation score of the image quality.
According to the patented full-reference objective stereo image quality assessment method based on visual saliency feature extraction, the step (3) comprises the following steps:
Step (3.1): shift all pixels of the right views of the reference and distorted stereo image pairs horizontally to the right k times, respectively, with a step of s pixels per shift, obtaining k corrected right views IR((x+i*s), y), (i = 1, 2, … k) after the horizontal shifts; here s = 1 and k = 25. Each corrected right view carries the corresponding label i, (i = 1, 2, … k).
Step (3.2): use the structural similarity (SSIM) algorithm to compute the structural similarity between the left view of the stereo image pair and each of the k corrected right views, obtaining k structural similarity maps. The SSIM expression is as follows:

where μx and μy respectively denote the means within corresponding image blocks of the left view and a corrected right-view image of the stereo image pair; σx and σy respectively denote the variance values within a corresponding image block of the left view and corrected right-view image; σxy denotes the covariance between an image block of the left view and one of the corrected right-view image; C1 and C2 are constants greater than zero that prevent the denominator from being zero; here C1 = 6.5025 and C2 = 58.5225.
Step (3.3): for each pixel (x, y) of the left view, take, among its k structural similarity maps, the one in which the local structural similarity is largest, with corresponding label i, (i = 1, 2, … k); then i*s is the disparity value corresponding to pixel (x, y), recorded as d, thereby constructing the disparity map D.
According to the patented full-reference objective stereo image quality assessment method based on visual saliency feature extraction, the step (6) is specifically:
The visual saliency map extraction method uses the spectral residual (SR) visual saliency model; the specific content is as follows:
Given an image I(x, y), we have:
where F(·) and F⁻¹(·) are the two-dimensional Fourier transform and its inverse transform, Re(·) denotes taking the real part, Angle(·) denotes taking the phase angle, S(x, y) is the saliency map obtained by the spectral residual method, g(x, y) is a Gaussian low-pass filter, and hn(f) is a local mean filter whose expression is as follows:

where σ is the standard deviation of the probability distribution, σ = 1.5;
The spectral residual method is applied to the intermediate reference and distorted images to obtain the reference and distorted visual saliency maps, and the integrated visual saliency map is established by the method shown in the following formula:

Sf(x, y) = Max[Sref(x, y), Sdis(x, y)]   (6-4)

where Sref and Sdis are respectively the visual saliency maps of the intermediate reference and distorted images, and Sf is the integrated visual saliency map.
According to the patented full-reference objective stereo image quality assessment method based on visual saliency feature extraction, the support vector machine SVR training and prediction of step (12) obtains the optimal prediction model specifically as follows:
The SVR training and prediction specifically uses 5-fold cross validation to train and test the model; the concrete scheme is as follows:
Step (12.1): randomly divide the samples into five mutually disjoint parts; select four of them for SVR training to obtain the best model, then apply the model to the remaining part to obtain the corresponding objective quality values and thereby predict the subjective quality;
Step (12.2): repeat the operation of step (12.1) many times, and take the average of all the data results to characterize the performance of the proposed model;
The expression is as follows:

Q = SVR(Index1, Index2, …, Indexn)   (12-1)

where Q is the objective quality evaluation score.
Claims (4)
1. A full-reference objective stereo image quality assessment method based on visual saliency feature extraction, characterized by comprising the following steps:
Step (1): input a reference stereo image pair and a distorted stereo image pair, where each stereo image pair comprises a left-view image and a right-view image;
Step (2): construct a Log-Gabor filter model and convolve it with the stereo image pairs of step (1) to obtain the energy response maps of the left and right views of the reference and distorted stereo image pairs;
The expression formula of Log Gabor filter is as follows:
where f0 and θ0 are the center frequency and orientation angle of the Log-Gabor filter, σθ and σf respectively represent the angular bandwidth and radial bandwidth of the filter, and f and θ respectively represent the radial coordinate and orientation angle of the filter;
Convolving the Log-Gabor filters with the left and right views of the reference and distorted stereo image pairs yields the corresponding energy response maps, whose expression is as follows:

where I(x, y) is a left or right view of the reference and distorted stereo image pairs, and ⊗ denotes the convolution operation;
Step (3): extract the disparity maps from the reference stereo image pair and the distorted stereo image pair input in step (1), respectively;
Step (4): shift the pixels of the right view of each stereo image pair input in step (1) horizontally to the right according to the disparity values of the disparity map obtained in step (3), constructing a calibrated right view IR((x+d), y) whose pixel coordinates correspond to the left view; then obtain the energy response maps of the left view and the calibrated right view on the basis of step (2), and compute the normalized left-view weight map WL(x, y) and calibrated right-view weight map WR((x+d), y), whose expressions are as follows:

where FL(x, y) and FR((x+d), y) are respectively the energy response maps of the left view and calibrated right view obtained in step (2), and d is the disparity value at the corresponding coordinate of the disparity map D computed in step (3);
Step (5): based on the left views of the reference and distorted stereo image pairs from step (1), the calibrated right views obtained in step (4), and the normalized left-view and calibrated right-view weight maps, apply a binocular view fusion model to fuse each stereo image pair, obtaining a reference intermediate image and a distorted intermediate image, respectively. The binocular view fusion formula is as follows:
CI(x, y) = WL(x, y) × IL(x, y) + WR((x+d), y) × IR((x+d), y) (5-1)
where CI(x, y) is the fused intermediate image, and IL(x, y) and IR((x+d), y) are the left view and calibrated right view of the stereo image pair, respectively;
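The weight-map expressions of step (4) are given as an image in the source; a common choice, assumed here, is to normalize each energy response by the sum of the two. A minimal sketch of that normalization combined with Eq. (5-1):

```python
import numpy as np

def binocular_fusion(IL, IR_cal, FL, FR_cal, eps=1e-12):
    # Normalized weights from the two energy responses (assumed form:
    # each response divided by their sum), then Eq. (5-1):
    # CI = WL * IL + WR * IR.
    WL = FL / (FL + FR_cal + eps)
    WR = FR_cal / (FL + FR_cal + eps)
    return WL * IL + WR * IR_cal
```

With equal energy responses the fusion degenerates to a plain average of the two views, which is a quick sanity check on the weights.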
Step (6): extract the reference and distorted visual saliency maps from the intermediate reference and distorted images obtained in step (5), and integrate them to establish the integrated visual saliency map Sf;
Step (7): extract depth feature information from the disparity maps of the reference and distorted stereo image pairs obtained in step (3), and measure the distortion level of the disparity map of the distorted stereo image pair; the similarity of the depth feature information of the reference and distorted stereo image pairs is extracted using a pixel-domain error method, serving as indices reflecting the quality degradation of the distorted stereo image pair in the disparity map. The expression is as follows:
where Dref is the disparity map of the reference image, Ddis is the disparity map of the distorted image, E(·) is the mean function, ε is a constant greater than zero that prevents the denominator from being zero, and Index1 and Index2 are the two similarity measurement indices of the depth feature information;
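The exact expressions for Index1 and Index2 appear only as an image in the source and are not reproduced here. The sketch below is therefore one plausible pair of pixel-domain indices consistent with the surrounding text (a mean-based ratio and a pixelwise similarity), offered purely as an assumption, not the patent's actual formulas.

```python
import numpy as np

def depth_similarity_indices(D_ref, D_dis, eps=1e-8):
    # Hypothetical forms: Index1 compares the mean disparity magnitudes of
    # the two maps; Index2 averages a pixelwise SSIM-style similarity.
    # Both equal 1 when the distorted disparity map matches the reference.
    e_ref = np.mean(np.abs(D_ref))
    e_dis = np.mean(np.abs(D_dis))
    index1 = (2 * e_ref * e_dis + eps) / (e_ref ** 2 + e_dis ** 2 + eps)
    index2 = np.mean((2 * D_ref * D_dis + eps) / (D_ref ** 2 + D_dis ** 2 + eps))
    return index1, index2
```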
Step (8): extract edge and texture features from the intermediate reference and distorted images obtained in step (5), respectively;
The image under test is convolved with the Prewitt operator to obtain a gradient map containing edge contour information. The expression for extracting the edge information features of the intermediate reference and distorted images with the Prewitt operator is as follows:
where f(x, y) is the left/right view of the stereo image pair, ⊗ denotes convolution, and hx and hy are the 3×3 vertical and horizontal Prewitt templates, used to detect the horizontal and vertical edges of the image, respectively; the template expressions are as follows:
The texture information features are extracted with the local binary pattern (LBP); the LBP expression is as follows:
where gc is the gray value of the central pixel of the image, gp is the gray value of a neighbouring pixel, x and y are the coordinates of the central pixel, and sgn(x) is the step function;
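The Prewitt templates and the LBP expression are given as images in the source. A minimal NumPy sketch of both feature extractors, using the standard 3×3 Prewitt kernels and the basic 8-neighbour LBP (the neighbour ordering and bit packing here are conventional assumptions):

```python
import numpy as np

# Standard 3x3 Prewitt horizontal/vertical templates.
H_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
H_Y = H_X.T

def conv3_same(img, k):
    # 'Same'-size 3x3 filtering with zero padding (the sign convention
    # is irrelevant for the gradient magnitude).
    p = np.pad(img, 1)
    out = np.zeros(img.shape, float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def prewitt_gradient(img):
    gx = conv3_same(img, H_X)
    gy = conv3_same(img, H_Y)
    return np.hypot(gx, gy)

def lbp8(img):
    # Basic 8-neighbour LBP: threshold each neighbour g_p against the
    # centre g_c and pack the eight comparison bits into one code.
    c = img[1:-1, 1:-1]
    H, W = img.shape
    code = np.zeros(c.shape, np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit
    return code
```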
Step (9): multiply, pixel by pixel, the visual information features of the intermediate reference and distorted images extracted in step (8) by the visual saliency map established in step (6), obtaining saliency-enhanced visual information features. The expression is as follows:
GMSR(x, y) = GMR(x, y) * Sf(x, y),  GMSD(x, y) = GMD(x, y) * Sf(x, y) (9-1)
TISR(x, y) = TIR(x, y) * Sf(x, y),  TISD(x, y) = TID(x, y) * Sf(x, y) (9-2)
ISR(x, y) = IR(x, y) * Sf(x, y),  ISD(x, y) = ID(x, y) * Sf(x, y) (9-3)
where GMR, TIR and IR are the edge, texture and luminance information of the intermediate reference image, GMD, TID and ID are the edge, texture and luminance information of the intermediate distorted image, and Sf is the integrated visual saliency map obtained in step (6);
Step (10): perform similarity measurement on the saliency-enhanced visual information features extracted in step (9). The expression is as follows:
where GMSR, TISR and ISR denote the saliency-enhanced edge, texture and luminance information features of the intermediate reference image, GMSD, TISD and ISD denote the saliency-enhanced edge, texture and luminance information of the intermediate distorted image, Index3, Index4 and Index5 are the similarity measurement indices of the edge, texture and luminance information features, respectively, and C4 is a constant greater than zero that prevents the denominator from being zero;
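The similarity expression itself is an image in the source. Feature-similarity indices of this family are commonly computed as a pixelwise SSIM-style map pooled by the mean, which is the assumed form sketched here (C4 plays the same zero-denominator role as in the claim):

```python
import numpy as np

def feature_similarity(Fr, Fd, C4=1e-4):
    # Assumed pixelwise similarity map pooled by the mean:
    # (2*Fr*Fd + C4) / (Fr^2 + Fd^2 + C4), equal to 1 for identical maps.
    return float(np.mean((2 * Fr * Fd + C4) / (Fr ** 2 + Fd ** 2 + C4)))
```

Applied to the three saliency-enhanced feature pairs, this yields Index3, Index4 and Index5 respectively.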
Step (11): apply down-sampling to the intermediate reference and distorted images obtained in step (5), obtaining intermediate reference and distorted images at L scales; apply the methods of steps (6), (9) and (10) to the intermediate reference and distorted images in the L scale spaces to establish visual saliency maps, extract visual features and perform similarity measurement, obtaining N similarity measurement indices in total, where N = 2L + 2;
The down-sampling method is as follows: given an input image, obtain a filtered image with a low-pass filter, then decimate the filtered image by a factor of m to obtain the down-sampled image;
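The low-pass filter is not specified in the claim; a simple separable [1, 2, 1]/4 binomial filter, assumed here, is a common choice before decimation:

```python
import numpy as np

def downsample(img, m=2):
    # Separable [1, 2, 1]/4 low-pass filter (edge-padded), then keep
    # every m-th row and column (decimation factor m).
    p = np.pad(img, 1, mode='edge')
    rows = (p[:-2, 1:-1] + 2 * p[1:-1, 1:-1] + p[2:, 1:-1]) / 4.0
    p = np.pad(rows, 1, mode='edge')
    blur = (p[1:-1, :-2] + 2 * p[1:-1, 1:-1] + p[1:-1, 2:]) / 4.0
    return blur[::m, ::m]
```

Calling this L times with m = 2 produces the pyramid of intermediate images used in step (11).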
Step (12): integrate the measurement indices obtained in steps (8) and (11), perform support vector regression (SVR) training and prediction to obtain the optimal prediction model, and map the result to the objective evaluation score of image quality.
2. The full-reference stereo image quality objective evaluation method based on visual saliency feature extraction according to claim 1, characterized in that step (3) comprises the following steps:
Step (3.1): horizontally shift all pixels of the right views of the reference and distorted stereo image pairs to the right k times, with a step of s pixels per shift, obtaining k corrected right views IR((x+i*s), y), i = 1, 2, …, k, after the horizontal shifts, each corrected right view being labelled i, i = 1, 2, …, k;
Step (3.2): compute the structural similarity between the left view of each stereo image pair and each of the k corrected right views using the structural similarity algorithm SSIM, obtaining k structural similarity maps. The SSIM expression is as follows:
where μx and μy denote the means of a corresponding image block in the left view and in a corrected right view of the stereo image pair, respectively; σx and σy denote the variances of that image block in the two views; σxy is the covariance between the image block of the left view and that of the corrected right view; and C1 and C2 are constants greater than zero that prevent the denominator from being zero;
Step (3.3): for each pixel (x, y) of the left view, take, among the k structural similarity maps, the one in which its local structural similarity value is largest; with the corresponding label i, i = 1, 2, …, k, the disparity value of pixel (x, y) is i*s, recorded as d, thereby constructing the disparity map D.
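Steps (3.1)-(3.3) can be sketched as a shift-and-score search. The SSIM expression in the claim is an image; the version below uses the standard SSIM formula with a box window standing in for the usual Gaussian window (an assumption for brevity), and the constants C1, C2 are illustrative:

```python
import numpy as np

def box_mean(a, r=3):
    # Local mean over a (2r+1)x(2r+1) window, edge-padded.
    p = np.pad(a, r, mode='edge')
    out = np.zeros(a.shape, float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def ssim_map(x, y, C1=1e-4, C2=9e-4):
    # Pixelwise SSIM from local means, variances and covariance.
    mx, my = box_mean(x), box_mean(y)
    vx = box_mean(x * x) - mx ** 2
    vy = box_mean(y * y) - my ** 2
    cxy = box_mean(x * y) - mx * my
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def estimate_disparity(left, right, k=6, s=1):
    # Shift the right view right by i*s pixels for i = 0..k, score each
    # shift against the left view with SSIM, and take the best-scoring
    # shift per pixel as its disparity (steps 3.1-3.3).
    h, w = left.shape
    scores = np.empty((k + 1, h, w))
    for i in range(k + 1):
        shifted = np.zeros_like(right)
        shifted[:, i * s:] = right[:, :w - i * s]
        scores[i] = ssim_map(left, shifted)
    return np.argmax(scores, axis=0) * s
```

k and s trade search range against cost: the maximum recoverable disparity is k*s, at a resolution of s pixels.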
3. The full-reference stereo image quality objective evaluation method based on visual saliency feature extraction according to claim 1, characterized in that step (6) is specifically:
The visual saliency map is extracted with the spectral residual (SR) visual saliency model, as follows:
Given an image I(x, y):
where F(·) and F-1(·) are the two-dimensional Fourier transform and its inverse, Re(·) takes the real part, Angle(·) takes the argument, S(x, y) is the saliency map obtained by the spectral residual method, g(x, y) is a Gaussian low-pass filter, and hn(f) is a local mean filter whose expression is as follows:
where σ is the standard deviation of the probability distribution;
The reference and distorted visual saliency maps are obtained from the intermediate reference and distorted images by the spectral residual method, and the integrated visual saliency map is established by the following formula:
Sf(x, y) = Max[Sref(x, y), Sdis(x, y)] (6-4)
where Sref and Sdis are the visual saliency maps of the intermediate reference and distorted images, respectively, and Sf is the integrated visual saliency map.
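A minimal sketch of the spectral residual pipeline and the max integration of Eq. (6-4). Eqs. (6-1)-(6-3) are images in the source, so the filter sizes here (3×3 local mean hn, Gaussian σ = 2.5) are illustrative assumptions following the standard spectral residual method:

```python
import numpy as np

def spectral_residual_saliency(img):
    # Spectral residual: R = log|F(I)| - h_n * log|F(I)|, recombined with
    # the phase; the squared inverse transform is smoothed with a
    # Gaussian low-pass g to give the saliency map S.
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-12)
    phase = np.angle(F)
    # h_n: 3x3 local mean filter over the log-amplitude spectrum.
    p = np.pad(log_amp, 1, mode='wrap')
    local = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    R = log_amp - local
    S = np.abs(np.fft.ifft2(np.exp(R + 1j * phase))) ** 2
    # g: Gaussian low-pass (sigma = 2.5 px), applied in the frequency domain.
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    G = np.exp(-2 * (np.pi ** 2) * (2.5 ** 2) * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(S) * G))

def integrate_saliency(S_ref, S_dis):
    # Eq. (6-4): pixelwise maximum of the two saliency maps.
    return np.maximum(S_ref, S_dis)
```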
4. The full-reference stereo image quality objective evaluation method based on visual saliency feature extraction according to claim 1, characterized in that the support vector regression (SVR) training and prediction in step (12), which obtains the optimal prediction model, is specifically:
The SVR training and prediction method uses 5-fold cross-validation to train and test the model. The concrete scheme is:
Step (12.1): randomly divide the samples into five mutually disjoint parts; select four of them for SVR training to obtain the best model, then apply the remaining part to that model, obtaining the corresponding objective quality values for predicting subjective quality;
Step (12.2): repeat the operation of step (12.1) several times, and take the average of all data results to characterize the performance of the proposed model;
The expression is as follows:
Q = SVR(Index1, Index2, …, Indexn) (12-1)
where Q is the objective quality evaluation score.
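The 5-fold scheme of steps (12.1)-(12.2) can be sketched generically, with the learner passed in as a pair of callables (in practice one would plug in an SVR, e.g. scikit-learn's `svm.SVR`; the trivial mean-predictor in the test below is only a stand-in):

```python
import numpy as np

def five_folds(n, seed=0):
    # Step (12.1): randomly split n sample indices into five mutually
    # disjoint parts.
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, 5)

def cross_validate(X, y, fit, predict, seed=0):
    # Train on four parts, predict the held-out part, and repeat for all
    # five folds; returns one out-of-fold prediction per sample. Averaging
    # performance over repeats with different seeds gives step (12.2).
    folds = five_folds(len(y), seed)
    pred = np.empty(len(y), float)
    for i, test in enumerate(folds):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        model = fit(X[train], y[train])
        pred[test] = predict(model, X[test])
    return pred
```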
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710721546.XA CN107578404B (en) | 2017-08-22 | 2017-08-22 | Full-reference stereo image quality objective evaluation method based on visual saliency feature extraction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107578404A CN107578404A (en) | 2018-01-12 |
CN107578404B true CN107578404B (en) | 2019-11-15 |
Family
ID=61034182
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710721546.XA Active CN107578404B (en) | 2017-08-22 | 2017-08-22 | Full-reference stereo image quality objective evaluation method based on visual saliency feature extraction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107578404B (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108335289A (en) * | 2018-01-18 | 2018-07-27 | 天津大学 | A full-reference image objective quality evaluation method based on fusion |
CN108520510B (en) * | 2018-03-19 | 2021-10-19 | 天津大学 | No-reference stereo image quality evaluation method based on overall and local analysis |
CN108629763B (en) * | 2018-04-16 | 2022-02-01 | 海信集团有限公司 | Disparity map judging method and device and terminal |
CN108449596B (en) * | 2018-04-17 | 2020-09-01 | 福州大学 | 3D stereoscopic image quality evaluation method integrating aesthetics and comfort |
CN108648180B (en) * | 2018-04-20 | 2020-11-17 | 浙江科技学院 | Full-reference image quality objective evaluation method based on visual multi-feature depth fusion processing |
CN109255358B (en) * | 2018-08-06 | 2021-03-26 | 浙江大学 | 3D image quality evaluation method based on visual saliency and depth map |
CN109345502B (en) * | 2018-08-06 | 2021-03-26 | 浙江大学 | Stereo image quality evaluation method based on disparity map stereo structure information extraction |
CN109242834A (en) * | 2018-08-24 | 2019-01-18 | 浙江大学 | A no-reference stereo image quality evaluation method based on convolutional neural networks |
CN109345552A (en) * | 2018-09-20 | 2019-02-15 | 天津大学 | Stereo image quality evaluation method based on region weight |
CN109523506B (en) * | 2018-09-21 | 2021-03-26 | 浙江大学 | Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement |
CN109257593B (en) * | 2018-10-12 | 2020-08-18 | 天津大学 | Immersive virtual reality quality evaluation method based on human eye visual perception process |
CN109525838A (en) * | 2018-11-28 | 2019-03-26 | 上饶师范学院 | Stereo image quality evaluation method based on binocular competition |
CN109872305B (en) * | 2019-01-22 | 2020-08-18 | 浙江科技学院 | No-reference stereo image quality evaluation method based on quality map generation network |
CN109714593A (en) * | 2019-01-31 | 2019-05-03 | 天津大学 | Three-dimensional video quality evaluation method based on binocular fusion network and conspicuousness |
CN111598826B (en) * | 2019-02-19 | 2023-05-02 | 上海交通大学 | Picture objective quality evaluation method and system based on combined multi-scale picture characteristics |
CN110084782B (en) * | 2019-03-27 | 2022-02-01 | 西安电子科技大学 | Full-reference image quality evaluation method based on image significance detection |
CN110428399B (en) * | 2019-07-05 | 2022-06-14 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and storage medium for detecting image |
CN110399881B (en) * | 2019-07-11 | 2021-06-01 | 深圳大学 | End-to-end quality enhancement method and device based on binocular stereo image |
CN111738270B (en) * | 2020-08-26 | 2020-11-13 | 北京易真学思教育科技有限公司 | Model generation method, device, equipment and readable storage medium |
CN112233089B (en) * | 2020-10-14 | 2022-10-25 | 西安交通大学 | No-reference stereo mixed distortion image quality evaluation method |
CN112508847A (en) * | 2020-11-05 | 2021-03-16 | 西安理工大学 | Image quality evaluation method based on depth feature and structure weighted LBP feature |
CN116563193A (en) * | 2022-01-27 | 2023-08-08 | 华为技术有限公司 | Image similarity measurement method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105744256A (en) * | 2016-03-31 | 2016-07-06 | 天津大学 | Three-dimensional image quality objective evaluation method based on graph-based visual saliency |
CN105825503A (en) * | 2016-03-10 | 2016-08-03 | 天津大学 | Visual-saliency-based image quality evaluation method |
CN105959684A (en) * | 2016-05-26 | 2016-09-21 | 天津大学 | Stereo image quality evaluation method based on binocular fusion |
CN106780476A (en) * | 2016-12-29 | 2017-05-31 | 杭州电子科技大学 | A stereo image saliency detection method based on human-eye stereoscopic vision characteristics |
CN106920232A (en) * | 2017-02-22 | 2017-07-04 | 武汉大学 | Gradient-similarity image quality evaluation method and system based on saliency detection |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9412024B2 (en) * | 2013-09-13 | 2016-08-09 | Interra Systems, Inc. | Visual descriptors based video quality assessment using outlier model |
2017-08-22 · CN CN201710721546.XA patent/CN107578404B/en · Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578404B (en) | Full-reference stereo image quality objective evaluation method based on visual saliency feature extraction | |
CN110363858B (en) | Three-dimensional face reconstruction method and system | |
CN107578403B (en) | Stereo image quality evaluation method based on gradient-information-guided binocular view fusion | |
CN102750695B (en) | Machine learning-based stereoscopic image quality objective assessment method | |
CN109886870B (en) | Remote sensing image fusion method based on dual-channel neural network | |
CN109523506B (en) | Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement | |
CN109255358B (en) | 3D image quality evaluation method based on visual saliency and depth map | |
CN106023230B (en) | A dense matching method suitable for deformed images | |
CN108734776A (en) | A speckle-based three-dimensional face reconstruction method and device | |
CN105205858A (en) | Indoor scene three-dimensional reconstruction method based on single depth vision sensor | |
CN105744256A (en) | Three-dimensional image quality objective evaluation method based on graph-based visual saliency | |
CN104079914B (en) | Based on the multi-view image ultra-resolution method of depth information | |
CN109345502B (en) | Stereo image quality evaluation method based on disparity map stereo structure information extraction | |
CN107392950A (en) | A cross-scale cost-aggregation stereo matching method based on weak-texture detection | |
CN102982334B (en) | Sparse disparity acquisition method based on target edge features and gray-level similarity | |
CN109242834A (en) | A no-reference stereo image quality evaluation method based on convolutional neural networks | |
CN108052909B (en) | Thin fiber cap plaque automatic detection method and device based on cardiovascular OCT image | |
CN106709504A (en) | Detail-preserving high fidelity tone mapping method | |
CN107590444A (en) | Detection method, device and storage medium for static obstacles | |
CN107360416A (en) | Stereo image quality evaluation method based on local multivariate Gaussian description | |
CN104853175B (en) | Novel synthesized virtual viewpoint objective quality evaluation method | |
CN109345552A (en) | Stereo image quality evaluation method based on region weight | |
CN110691236B (en) | Panoramic video quality evaluation method | |
CN109409413B (en) | Automatic classification method for X-ray breast lump images | |
CN107483918B (en) | Full-reference stereo image quality evaluation method based on saliency | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||