CN108460752B - Objective evaluation method for quality of asymmetric multi-distortion stereo image - Google Patents

Publication number: CN108460752B (granted); earlier publication: CN108460752A
Application number: CN201711380389.7A
Authority: CN (China)
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Inventors: 邵枫, 高影, 李福翠
Original assignee: Ningbo University (application filed by Ningbo University)
Current assignee: Jinan Fengzhi Test Instrument Co., Ltd. (the listed assignees may be inaccurate)
Other languages: Chinese (zh)
Prior art keywords: image, vector, sub-block, image quality

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses an objective evaluation method for the quality of an asymmetric multi-distortion stereo image. In the testing stage, a sparse coefficient matrix is obtained by optimization for each sub-block of the local phase image and the local amplitude image of the tested stereo image, according to the image feature dictionary tables for local phase images and local amplitude images under different distortion types constructed in the training stage. An image quality vector is then constructed for each sub-block from the sparse coefficient matrix and the image quality dictionary tables for the different distortion types obtained in the training stage. Finally, an objective evaluation prediction value of the image quality of the tested stereo image is obtained through multi-distortion fusion, local-global fusion, left-right viewpoint fusion and phase-amplitude fusion of the sparse coefficient matrices and image quality vectors. The objective evaluation prediction value keeps good consistency with the subjective evaluation value; and since the image feature dictionary tables and image quality dictionary tables do not need to be calculated in the testing stage, the complex machine-learning training process is avoided there.

Description

Objective evaluation method for quality of asymmetric multi-distortion stereo image
Technical Field
The invention relates to an image quality evaluation method, in particular to an objective evaluation method for the quality of an asymmetric multi-distortion stereo image.
Background
With the rapid development of image coding and display technologies, image quality evaluation research has become a very important link. The objective image quality evaluation method uses a computer to evaluate image quality automatically, and is designed so that the objective evaluation result stays as consistent as possible with the subjective evaluation result, thereby saving the time and cost of subjective testing. According to the degree of reference to and dependence on the original image, objective image quality evaluation methods can be divided into three categories: the Full Reference (FR) image quality evaluation method, the Reduced Reference (RR) image quality evaluation method, and the No Reference (NR) image quality evaluation method.
The no-reference image quality evaluation method requires no reference image information and is therefore more flexible, so it has received increasingly wide attention. At present, existing no-reference image quality evaluation methods build a prediction model through machine learning, but the computational complexity is high, and training the model requires the subjective evaluation values of the evaluated images, so such methods are not suitable for practical applications and have certain limitations. In particular, for the objective quality evaluation of asymmetric multi-distortion stereo images, existing objective evaluation methods for single-viewpoint multi-distortion images or for single-distortion stereo images cannot be applied directly. The main technical problems to be solved in objective quality evaluation research for asymmetric multi-distortion stereo images are therefore: how to construct a dictionary capable of reflecting the characteristics of multi-distortion stereo images, how to construct a dictionary capable of reflecting the quality of multi-distortion stereo images, and how to establish the relation between these two dictionaries.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an objective evaluation method for quality of an asymmetric multi-distortion stereo image, which can effectively improve the correlation between objective evaluation results and subjective perception, has low calculation complexity and does not need to predict subjective evaluation values of tested stereo images.
The technical scheme adopted by the invention for solving the technical problems is as follows: an objective evaluation method for quality of asymmetric multi-distortion stereo images is characterized by comprising a training stage and a testing stage;
the specific steps of the training phase process are as follows:
① _1, selecting N original undistorted stereo images with width W and height H; then respectively performing JPEG distortion, Gaussian blur distortion and Gaussian white noise distortion of L different distortion intensities on each original undistorted stereo image, so as to obtain, for each original undistorted stereo image, JPEG distorted stereo images of L distortion intensities, Gaussian blur distorted stereo images of L distortion intensities and Gaussian white noise distorted stereo images of L distortion intensities; then forming a first training image set from all original undistorted stereo images and the JPEG distorted stereo images of L distortion intensities corresponding to each original undistorted stereo image, recorded as S1 = {Su,org, S1,u,v | 1 ≤ u ≤ N, 1 ≤ v ≤ L}; forming a second training image set from all original undistorted stereo images and the corresponding Gaussian blur distorted stereo images of L distortion intensities, recorded as S2 = {Su,org, S2,u,v | 1 ≤ u ≤ N, 1 ≤ v ≤ L}; and forming a third training image set from all original undistorted stereo images and the corresponding Gaussian white noise distorted stereo images of L distortion intensities, recorded as S3 = {Su,org, S3,u,v | 1 ≤ u ≤ N, 1 ≤ v ≤ L}; wherein N > 1, L > 1, Su,org represents the u-th original undistorted stereo image in S1, S2 and S3, S1,u,v represents the distorted stereo image of the v-th distortion intensity corresponding to the u-th original undistorted stereo image in S1, S2,u,v represents the distorted stereo image of the v-th distortion intensity corresponding to the u-th original undistorted stereo image in S2, and S3,u,v represents the distorted stereo image of the v-th distortion intensity corresponding to the u-th original undistorted stereo image in S3;
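The three distortion types of step ① _1 can be sketched as follows. This is a minimal NumPy-only illustration: the patent does not specify filter parameters in this excerpt, so the intensity values below are assumptions, and JPEG distortion (which needs an actual encoder, e.g. Pillow's `Image.save(..., quality=q)`) is noted only in a comment.

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    """Normalized 1-D Gaussian kernel of length 2*radius+1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Gaussian blur distortion: separable convolution with reflect padding."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel1d(sigma, radius)
    pad = np.pad(img.astype(np.float64), radius, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def white_noise(img: np.ndarray, sigma: float, rng) -> np.ndarray:
    """Gaussian white noise distortion on a 0-255 grayscale image."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255)

# JPEG distortion would re-encode at decreasing quality, e.g. with Pillow:
#   img.save(buf, format="JPEG", quality=q)  for q in [60, 40, 25, 15, 8]
# Illustrative (assumed) intensity ladders for L = 5 levels:
blur_sigmas = [0.8, 1.6, 2.4, 3.2, 4.0]
noise_sigmas = [5, 10, 15, 20, 25]
```

A level index v then simply selects the v-th entry of the intensity ladder for the chosen distortion type.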
① _2, respectively adopting 6 different full reference image quality evaluation methods to obtain the objective evaluation prediction value of each distorted stereo image in the first training image set S1, the second training image set S2 and the third training image set S3; then forming, in sequence, the 6 objective evaluation prediction values of each distorted stereo image in S1 into the image quality vector of that distorted stereo image; forming, in sequence, the 6 objective evaluation prediction values of each distorted stereo image in S2 into the image quality vector of that distorted stereo image; and forming, in sequence, the 6 objective evaluation prediction values of each distorted stereo image in S3 into the image quality vector of that distorted stereo image;
① _3, forming a first training sample data set from the image quality vectors and the average subjective score difference values of all distorted stereo images in the first training image set; then adopting support vector regression as the machine learning method to train all image quality vectors in the first training sample data set, so that the error between the regression function value obtained through training and the subjective quality recommended value is minimum, and obtaining by fitting the optimal weight vector w1,opt and the optimal bias term b1,opt; then constructing a first quality prediction model from w1,opt and b1,opt, denoted g1(y1), g1(y1) = (w1,opt)^T φ(y1) + b1,opt; wherein g1() is in the form of a function, y1 is used for representing an image quality vector and serves as the input vector of the first quality prediction model, (w1,opt)^T is the transpose of w1,opt, and φ(y1) is a linear function of y1;

likewise, forming a second training sample data set from the image quality vectors and the average subjective score difference values of all distorted stereo images in the second training image set; then adopting support vector regression as the machine learning method to train all image quality vectors in the second training sample data set, so that the error between the regression function value obtained through training and the subjective quality recommended value is minimum, and obtaining by fitting the optimal weight vector w2,opt and the optimal bias term b2,opt; then constructing a second quality prediction model from w2,opt and b2,opt, denoted g2(y2), g2(y2) = (w2,opt)^T φ(y2) + b2,opt; wherein g2() is in the form of a function, y2 is used for representing an image quality vector and serves as the input vector of the second quality prediction model, (w2,opt)^T is the transpose of w2,opt, and φ(y2) is a linear function of y2;

likewise, forming a third training sample data set from the image quality vectors and the average subjective score difference values of all distorted stereo images in the third training image set; then adopting support vector regression as the machine learning method to train all image quality vectors in the third training sample data set, so that the error between the regression function value obtained through training and the subjective quality recommended value is minimum, and obtaining by fitting the optimal weight vector w3,opt and the optimal bias term b3,opt; then constructing a third quality prediction model from w3,opt and b3,opt, denoted g3(y3), g3(y3) = (w3,opt)^T φ(y3) + b3,opt; wherein g3() is in the form of a function, y3 is used for representing an image quality vector and serves as the input vector of the third quality prediction model, (w3,opt)^T is the transpose of w3,opt, and φ(y3) is a linear function of y3;
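The support-vector-regression fit of step ① _3 can be sketched with scikit-learn's `LinearSVR` (since φ is a linear function, a linear SVR matches the model form g(y) = w^T φ(y) + b). The 6-dimensional quality vectors and subjective score differences below are synthetic placeholders, not data from the patent:

```python
import numpy as np
from sklearn.svm import LinearSVR

# Synthetic stand-in: one 6-dim image quality vector per distorted image
# (6 full-reference metric scores) and its average subjective score difference.
rng = np.random.default_rng(42)
Y = rng.uniform(0, 1, (200, 6))              # image quality vectors y
w_true = np.array([0.9, -0.4, 0.3, 0.7, -0.2, 0.5])
dmos = Y @ w_true + 0.1                      # synthetic subjective scores

# Epsilon-insensitive linear SVR: minimizes the error between the regression
# function value and the subjective quality value, yielding w_opt and b_opt.
model = LinearSVR(C=1.0, epsilon=0.01, max_iter=10000)
model.fit(Y, dmos)

# g(y) = w_opt^T * phi(y) + b_opt with linear phi is exactly model.predict
pred = model.predict(Y)
```

In the patent's scheme three such models g1, g2, g3 are fitted, one per distortion-type training set.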
① _4, calculating the local phase feature and the local amplitude feature of each pixel point in each distorted stereo image in the first training image set S1, so as to obtain the local phase image and the local amplitude image of each distorted stereo image in S1; recording the local phase image and the local amplitude image of the distorted stereo image of the v-th distortion intensity corresponding to the u-th original undistorted stereo image correspondingly as LP1,u,v and LA1,u,v; then denoting the set formed by the local phase images of all distorted stereo images in S1 as {LP1,u,v} and the set formed by the local amplitude images of all distorted stereo images in S1 as {LA1,u,v};

likewise, calculating the local phase feature and the local amplitude feature of each pixel point in each distorted stereo image in the second training image set S2, so as to obtain the local phase image and the local amplitude image of each distorted stereo image in S2, correspondingly recorded as LP2,u,v and LA2,u,v; then denoting the set formed by the local phase images of all distorted stereo images in S2 as {LP2,u,v} and the set formed by the local amplitude images as {LA2,u,v};

likewise, calculating the local phase feature and the local amplitude feature of each pixel point in each distorted stereo image in the third training image set S3, so as to obtain the local phase image and the local amplitude image of each distorted stereo image in S3, correspondingly recorded as LP3,u,v and LA3,u,v; then denoting the set formed by the local phase images of all distorted stereo images in S3 as {LP3,u,v} and the set formed by the local amplitude images as {LA3,u,v};
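This excerpt does not specify how the per-pixel local phase and local amplitude features of step ① _4 are computed; a common construction in phase-based quality assessment is a log-Gabor band combined with the Riesz (monogenic) transform. The sketch below assumes that construction, and the wavelength and bandwidth values are illustrative:

```python
import numpy as np

def local_phase_amplitude(img, wavelength=6.0, sigma_on_f=0.55):
    """Per-pixel local phase and local amplitude from one 2-D log-Gabor band
    plus the Riesz transform (monogenic signal). Parameter values are assumed."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                      # avoid log(0) / divide-by-zero at DC
    f0 = 1.0 / wavelength
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    lg[0, 0] = 0.0                          # zero DC response

    F = np.fft.fft2(img.astype(np.float64))
    even = np.real(np.fft.ifft2(F * lg))                     # even (in-phase) part
    o1 = np.real(np.fft.ifft2(F * lg * (1j * fx / radius)))  # Riesz x-component
    o2 = np.real(np.fft.ifft2(F * lg * (1j * fy / radius)))  # Riesz y-component

    amplitude = np.sqrt(even ** 2 + o1 ** 2 + o2 ** 2)       # local amplitude
    phase = np.arctan2(np.hypot(o1, o2), even)               # local phase in [0, pi]
    return phase, amplitude
```

Applying this to every distorted image yields the local phase image and local amplitude image used in the subsequent steps.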
① _5, dividing each local phase image in {LP1,u,v} and each local amplitude image in {LA1,u,v} (the local phase images and local amplitude images of the first training image set) into M sub-blocks of size 8 × 8 which do not overlap with each other, where M = ⌊W/8⌋ × ⌊H/8⌋; then forming, in sequence, the pixel values of all pixel points in each sub-block of each local phase image into the image feature vector of that sub-block, recording the image feature vector formed in sequence by the pixel values of all pixel points in the k-th sub-block in all local phase images as x1,k,p; and forming, in sequence, the pixel values of all pixel points in each sub-block of each local amplitude image into the image feature vector of that sub-block, recording the image feature vector formed in sequence by the pixel values of all pixel points in the k-th sub-block in all local amplitude images as x1,k,a; then denoting the set of image feature vectors of the sub-blocks in all local phase images as {x1,k,p | 1 ≤ k ≤ M} and the set of image feature vectors of the sub-blocks in all local amplitude images as {x1,k,a | 1 ≤ k ≤ M}; wherein ⌊ ⌋ is the round-down (floor) operation, 1 ≤ k ≤ M, and the dimensions of x1,k,p and x1,k,a are both 64 × 1;

likewise, dividing each local phase image in {LP2,u,v} and each local amplitude image in {LA2,u,v} (those of the second training image set) into M sub-blocks of size 8 × 8 which do not overlap with each other; then forming the image feature vector of each sub-block in the same manner, recording the image feature vector of the k-th sub-block in all local phase images as x2,k,p and that of the k-th sub-block in all local amplitude images as x2,k,a; then denoting the corresponding sets as {x2,k,p | 1 ≤ k ≤ M} and {x2,k,a | 1 ≤ k ≤ M}; wherein the dimensions of x2,k,p and x2,k,a are both 64 × 1;

likewise, dividing each local phase image in {LP3,u,v} and each local amplitude image in {LA3,u,v} (those of the third training image set) into M sub-blocks of size 8 × 8 which do not overlap with each other; then forming the image feature vector of each sub-block in the same manner, recording the image feature vector of the k-th sub-block in all local phase images as x3,k,p and that of the k-th sub-block in all local amplitude images as x3,k,a; then denoting the corresponding sets as {x3,k,p | 1 ≤ k ≤ M} and {x3,k,a | 1 ≤ k ≤ M}; wherein the dimensions of x3,k,p and x3,k,a are both 64 × 1;
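The sub-block partition of step ① _5 (non-overlapping 8 × 8 blocks, one 64-dimensional feature vector of raster-ordered pixel values per block) can be sketched as:

```python
import numpy as np

def block_feature_vectors(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Split a 2-D image into non-overlapping block x block sub-blocks and
    return an M x 64 matrix: one raster-ordered pixel vector per sub-block,
    with M = floor(H/block) * floor(W/block)."""
    h, w = img.shape
    nh, nw = h // block, w // block
    img = img[: nh * block, : nw * block]          # drop remainder rows/columns
    blocks = img.reshape(nh, block, nw, block).swapaxes(1, 2)   # (nh, nw, 8, 8)
    return blocks.reshape(nh * nw, block * block)               # M x 64
```

Each row of the returned matrix is one sub-block's image feature vector; transposing a row gives the 64 × 1 column vector used in the patent's notation.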
① _6, dividing each distorted stereo image in the first training image set S1 into M sub-blocks of size 8 × 8 which do not overlap with each other, where M = ⌊W/8⌋ × ⌊H/8⌋; then respectively adopting the 6 different full reference image quality evaluation methods to obtain the objective evaluation prediction value of each sub-block in each distorted stereo image in S1; then forming, in sequence, the 6 objective evaluation prediction values of each sub-block in each distorted stereo image into the image quality vector of that sub-block, recording the image quality vector formed in sequence by the 6 objective evaluation prediction values of the k-th sub-block in all distorted stereo images as y1,k; then denoting the set of image quality vectors of all sub-blocks in all distorted stereo images in S1 as {y1,k | 1 ≤ k ≤ M}; wherein the dimension of y1,k is 6 × 1;

likewise, dividing each distorted stereo image in the second training image set S2 into M sub-blocks of size 8 × 8 which do not overlap with each other; obtaining the objective evaluation prediction value of each sub-block by the 6 different full reference image quality evaluation methods; forming the image quality vector y2,k of the k-th sub-block in the same manner; then denoting the set of image quality vectors of all sub-blocks in all distorted stereo images in S2 as {y2,k | 1 ≤ k ≤ M}; wherein the dimension of y2,k is 6 × 1;

likewise, dividing each distorted stereo image in the third training image set S3 into M sub-blocks of size 8 × 8 which do not overlap with each other; obtaining the objective evaluation prediction value of each sub-block by the 6 different full reference image quality evaluation methods; forming the image quality vector y3,k of the k-th sub-block in the same manner; then denoting the set of image quality vectors of all sub-blocks in all distorted stereo images in S3 as {y3,k | 1 ≤ k ≤ M}; wherein the dimension of y3,k is 6 × 1;
① _7, adopting the K-SVD method to perform a joint dictionary training operation on the set formed by {x1,k,p | 1 ≤ k ≤ M} and {y1,k | 1 ≤ k ≤ M} (the local phase image feature vectors and the image quality vectors of the sub-blocks of the first training image set), so as to obtain by construction the image feature dictionary table and the image quality dictionary table of the local phase images, correspondingly denoted Df1,p and Dq1,p; and likewise performing the joint dictionary training operation on the set formed by {x1,k,a | 1 ≤ k ≤ M} and {y1,k | 1 ≤ k ≤ M}, obtaining the image feature dictionary table and the image quality dictionary table of the local amplitude images, correspondingly denoted Df1,a and Dq1,a; wherein the dimensions of Df1,p and Df1,a are both 64 × K, the dimensions of Dq1,p and Dq1,a are both 6 × K, K represents the set number of dictionary atoms, and K ≥ 1;

likewise, adopting the K-SVD method to perform the joint dictionary training operation for the second training image set, obtaining the respective image feature dictionary tables and image quality dictionary tables Df2,p, Dq2,p, Df2,a and Dq2,a, and for the third training image set, obtaining Df3,p, Dq3,p, Df3,a and Dq3,a; wherein the dimensions of Df2,p, Df2,a, Df3,p and Df3,a are all 64 × K, and the dimensions of Dq2,p, Dq2,a, Dq3,p and Dq3,a are all 6 × K;
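A common way to realize the joint (coupled) dictionary training of step ① _7 is to stack each 64-dimensional feature vector with its 6-dimensional quality vector and learn one 70-row dictionary, whose top 64 rows become the feature dictionary and bottom 6 rows the quality dictionary sharing the same sparse codes. The sketch below uses scikit-learn's `DictionaryLearning` as a stand-in for K-SVD, on synthetic stand-in data:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
M, K = 300, 32
X = rng.normal(size=(M, 64))    # sub-block image feature vectors x_k
Y = rng.normal(size=(M, 6))     # sub-block image quality vectors y_k
stacked = np.hstack([X, Y])     # M x 70 joint training samples

# Alternating sparse coding (OMP) and dictionary update, the same structure
# as K-SVD; max_iter kept small for illustration.
dl = DictionaryLearning(n_components=K, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, max_iter=10,
                        random_state=0)
codes = dl.fit_transform(stacked)        # M x K shared sparse coefficients
D = dl.components_.T                     # 70 x K joint dictionary
D_f, D_q = D[:64, :], D[64:, :]          # 64 x K feature / 6 x K quality dictionary
```

Because both sub-dictionaries are learned against the same sparse codes, a code recovered from a feature vector at test time can be applied to the quality dictionary to estimate the sub-block's quality vector.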
the specific steps of the test phase process are as follows:
② _1, for any tested stereo image Stest with width W' and height H', recording the left viewpoint image of Stest as Ltest and the right viewpoint image of Stest as Rtest; wherein W' is the same as or different from W, and H' is the same as or different from H;
② _2, obtaining, by the same operation as in step ① _4, the respective local phase image and local amplitude image of Stest, Ltest and Rtest; recording the local phase image and the local amplitude image of Ltest correspondingly as LPtest,L and LAtest,L, and recording the local phase image and the local amplitude image of Rtest correspondingly as LPtest,R and LAtest,R;
② _3, dividing LPtest,L, LAtest,L, LPtest,R and LAtest,R (the local phase images and local amplitude images of the left and right viewpoint images of the tested stereo image) respectively into M' sub-blocks of size 8 × 8 which do not overlap with each other, where M' = ⌊W'/8⌋ × ⌊H'/8⌋; then forming, in sequence, the pixel values of all pixel points in each sub-block of LPtest,L into the image feature vector of that sub-block, recording the image feature vector formed in sequence by the pixel values of all pixel points in the t-th sub-block as xt,L,p; forming the image feature vector of each sub-block of LAtest,L in the same manner, recording that of the t-th sub-block as xt,L,a; forming the image feature vector of each sub-block of LPtest,R in the same manner, recording that of the t-th sub-block as xt,R,p; and forming the image feature vector of each sub-block of LAtest,R in the same manner, recording that of the t-th sub-block as xt,R,a; then denoting the set of image feature vectors of all sub-blocks in LPtest,L as {xt,L,p | 1 ≤ t ≤ M'}, the set in LAtest,L as {xt,L,a | 1 ≤ t ≤ M'}, the set in LPtest,R as {xt,R,p | 1 ≤ t ≤ M'}, and the set in LAtest,R as {xt,R,a | 1 ≤ t ≤ M'}; wherein the dimensions of xt,L,p, xt,L,a, xt,R,p and xt,R,a are all 64 × 1;
② _4 constructed according to the procedure in the training phase
Figure BDA00015154769500000917
Separately optimized reconstruction
Figure BDA00015154769500000918
And
Figure BDA00015154769500000919
a first sparse coefficient matrix for each respective image feature vector
Figure BDA00015154769500000920
Is recorded as a first sparse coefficient matrix
Figure BDA00015154769500000921
Figure BDA00015154769500000922
Is solved by adopting a K-SVD method
Figure BDA00015154769500000923
Obtained by
Figure BDA00015154769500000924
Is recorded as a first sparse coefficient matrix
Figure BDA00015154769500000925
Figure BDA00015154769500000926
Is solved by adopting a K-SVD method
Figure BDA00015154769500000927
Obtaining;
also, in the same manner as above,constructed according to a process during a training phase
Figure BDA00015154769500000928
Separately optimized reconstruction
Figure BDA00015154769500000929
And
Figure BDA00015154769500000930
a second sparse coefficient matrix for each respective image feature vector
Figure BDA00015154769500000931
Is recorded as a second sparse coefficient matrix
Figure BDA00015154769500000932
Figure BDA00015154769500000933
Is solved by adopting a K-SVD method
Figure BDA00015154769500000934
Obtained by
Figure BDA00015154769500000935
Is recorded as a second sparse coefficient matrix
Figure BDA00015154769500000936
Figure BDA00015154769500000937
Is solved by adopting a K-SVD method
Figure BDA0001515476950000101
Obtaining;
also constructed from the process during the training phase
Figure BDA0001515476950000102
Separately optimized reconstruction
Figure BDA0001515476950000103
And
Figure BDA0001515476950000104
a third sparse coefficient matrix for each respective image feature vector
Figure BDA0001515476950000105
Is recorded as a third sparse coefficient matrix
Figure BDA0001515476950000106
Figure BDA0001515476950000107
Is solved by adopting a K-SVD method
Figure BDA0001515476950000108
Obtained by
Figure BDA0001515476950000109
Is recorded as a third sparse coefficient matrix
Figure BDA00015154769500001010
Figure BDA00015154769500001011
Is solved by adopting a K-SVD method
Figure BDA00015154769500001012
Obtaining;
also constructed from the process during the training phase
Figure BDA00015154769500001013
Separately optimized reconstruction
Figure BDA00015154769500001014
And
Figure BDA00015154769500001015
a first sparse coefficient matrix for each respective image feature vector
Figure BDA00015154769500001016
Is recorded as a first sparse coefficient matrix
Figure BDA00015154769500001017
Figure BDA00015154769500001018
Is solved by adopting a K-SVD method
Figure BDA00015154769500001019
Obtained by
Figure BDA00015154769500001020
Is recorded as a first sparse coefficient matrix
Figure BDA00015154769500001021
Figure BDA00015154769500001022
Is solved by adopting a K-SVD method
Figure BDA00015154769500001023
Obtaining;
also constructed from the process during the training phase
Figure BDA00015154769500001024
Separately optimized reconstruction
Figure BDA00015154769500001025
And
Figure BDA00015154769500001026
a second sparse coefficient matrix for each respective image feature vector
Figure BDA00015154769500001027
Is recorded as a second sparse coefficient matrix
Figure BDA00015154769500001028
Figure BDA00015154769500001029
Is solved by adopting a K-SVD method
Figure BDA00015154769500001030
Obtained by
Figure BDA00015154769500001031
Is recorded as a second sparse coefficient matrix
Figure BDA00015154769500001032
Figure BDA00015154769500001033
Is solved by adopting a K-SVD method
Figure BDA00015154769500001034
Obtaining;
also constructed from the process during the training phase
Figure BDA00015154769500001035
Separately optimized reconstruction
Figure BDA00015154769500001036
And
Figure BDA0001515476950000111
a third sparse coefficient matrix for each respective image feature vector
Figure BDA0001515476950000112
Is recorded as a third sparse coefficient matrix
Figure BDA0001515476950000113
Figure BDA0001515476950000114
Is solved by adopting a K-SVD method
Figure BDA0001515476950000115
Obtained by
Figure BDA0001515476950000116
Is recorded as a third sparse coefficient matrix
Figure BDA0001515476950000117
Figure BDA0001515476950000118
Is solved by adopting a K-SVD method
Figure BDA0001515476950000119
Obtaining;
wherein,
Figure BDA00015154769500001110
and
Figure BDA00015154769500001111
the dimensions are all K × 1, min() is the minimum-value function, the symbol "‖·‖F" is the Frobenius norm symbol of a matrix, the symbol "‖·‖1" is the 1-norm symbol of a matrix, and λ is the Lagrange parameter;
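The ② _4 optimization (a Frobenius/2-norm reconstruction error plus a λ-weighted 1-norm penalty on the sparse coefficients) is a standard lasso-type sparse-coding problem. The patent solves it within a K-SVD framework; the sketch below instead uses plain ISTA (iterative shrinkage-thresholding) on a single feature vector, with an assumed 64-dimensional feature and a random stand-in dictionary `D`, purely to illustrate the objective.

```python
import numpy as np

def sparse_code_ista(x, D, lam=0.05, n_iter=200):
    """ISTA sketch for min_a 0.5*||x - D @ a||_2^2 + lam*||a||_1.

    x: (d,) feature vector of one sub-block; D: (d, K) dictionary with
    unit-norm columns; returns the (K,) sparse coefficient vector."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L      # gradient step on the 2-norm term
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.0, -0.7, 0.5]
x = D @ a_true                             # synthetic 3-sparse feature vector
a_hat = sparse_code_ista(x, D)             # sparse coefficient vector of x
```

The soft-threshold step is exactly the proximal operator induced by the 1-norm term, so each iteration decreases the combined objective.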
② _5 constructed according to the procedure in the training phase
Figure BDA00015154769500001112
Estimate separately
Figure BDA00015154769500001113
And
Figure BDA00015154769500001114
the first image quality vector of each sub-block in each;
Figure BDA00015154769500001115
The first image quality vector of the t sub-block of (1) is noted
Figure BDA00015154769500001116
Figure BDA00015154769500001117
and
Figure BDA00015154769500001118
The first image quality vector of the t sub-block of (1) is noted
Figure BDA00015154769500001119
Figure BDA00015154769500001120
Also constructed from the process during the training phase
Figure BDA00015154769500001121
Estimate separately
Figure BDA00015154769500001122
And
Figure BDA00015154769500001123
the second image quality vector of each sub-block in each;
Figure BDA00015154769500001124
The second image quality vector of the t sub-block of (1)
Figure BDA00015154769500001125
Figure BDA00015154769500001126
and
Figure BDA00015154769500001127
The second image quality vector of the t sub-block of (1)
Figure BDA00015154769500001128
Figure BDA00015154769500001129
Also constructed from the process during the training phase
Figure BDA00015154769500001130
Estimate separately
Figure BDA00015154769500001131
And
Figure BDA00015154769500001132
the third image quality vector of each sub-block in each
Figure BDA00015154769500001133
The third image quality vector of the t sub-block of (1)
Figure BDA00015154769500001134
Figure BDA00015154769500001135
and
Figure BDA00015154769500001136
The third image quality vector of the t sub-block of (1)
Figure BDA00015154769500001137
Figure BDA00015154769500001138
Also constructed from the process during the training phase
Figure BDA00015154769500001139
Estimate separately
Figure BDA00015154769500001140
And
Figure BDA00015154769500001141
the first image quality vector of each sub-block in each;
Figure BDA00015154769500001142
The first image quality vector of the t sub-block of (1) is noted
Figure BDA00015154769500001143
Figure BDA00015154769500001144
and
Figure BDA00015154769500001145
The first image quality vector of the t sub-block of (1) is noted
Figure BDA00015154769500001146
Figure BDA00015154769500001147
Also constructed from the process during the training phase
Figure BDA00015154769500001148
Estimate separately
Figure BDA00015154769500001149
And
Figure BDA00015154769500001150
the second image quality vector of each sub-block in each;
Figure BDA00015154769500001151
The second image quality vector of the t sub-block of (1)
Figure BDA00015154769500001152
Figure BDA0001515476950000121
and
Figure BDA0001515476950000122
The second image quality vector of the t sub-block of (1)
Figure BDA0001515476950000123
Figure BDA0001515476950000124
Also constructed from the process during the training phase
Figure BDA0001515476950000125
Estimate separately
Figure BDA0001515476950000126
And
Figure BDA0001515476950000127
the third image quality vector of each sub-block in each
Figure BDA0001515476950000128
The third image quality vector of the t sub-block of (1)
Figure BDA0001515476950000129
Figure BDA00015154769500001210
and
Figure BDA00015154769500001211
The third image quality vector of the t sub-block of (1)
Figure BDA00015154769500001227
Figure BDA00015154769500001212
wherein,
Figure BDA00015154769500001213
and
Figure BDA00015154769500001214
the dimensions of (A) are all 6 x 1;
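Step ② _5 maps each sub-block's sparse coefficients to a 6 × 1 image quality vector using the image quality dictionary table from the training phase. The exact formula sits in the unrendered equations; a common form in sparse-representation quality assessment, assumed here, is a simple product of the quality dictionary and the coefficient vector:

```python
import numpy as np

def estimate_quality_vector(Dq, a):
    """Assumed form of step 2_5: reconstruct the 6 x 1 image quality vector
    of a sub-block from the image quality dictionary table Dq (6 x K, built
    in the training phase) and the sub-block's sparse coefficients a (K,)."""
    return Dq @ a

# Toy 6 x 2 quality dictionary (one row per full-reference metric) and a
# 2-atom sparse coefficient vector -- purely illustrative shapes and values.
Dq = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.5, 0.5],
               [0.2, 0.8],
               [0.8, 0.2],
               [1.0, 1.0]])
a = np.array([0.6, 0.4])
q = estimate_quality_vector(Dq, a)   # one entry per full-reference metric
```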
② _6, computing
Figure BDA00015154769500001215
and the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block therein; in
Figure BDA00015154769500001216
the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of the t-th sub-block are correspondingly recorded as
Figure BDA00015154769500001217
And
Figure BDA00015154769500001218
Figure BDA00015154769500001219
wherein exp () represents an exponential function with a natural base e as a base, and the symbol "| | | | purple2"is the 2-norm sign of the matrix, η is the control parameter,
Figure BDA00015154769500001220
is composed of
Figure BDA00015154769500001221
The input vector of (1);
also, calculate
Figure BDA00015154769500001222
and the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block therein; in
Figure BDA00015154769500001223
the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of the t-th sub-block are correspondingly recorded as
Figure BDA00015154769500001224
And
Figure BDA00015154769500001225
Figure BDA00015154769500001226
Figure BDA0001515476950000131
wherein,
Figure BDA0001515476950000132
is composed of
Figure BDA0001515476950000133
The input vector of (1);
also, calculate
Figure BDA0001515476950000134
and the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block therein; in
Figure BDA0001515476950000135
the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of the t-th sub-block are correspondingly recorded as
Figure BDA0001515476950000136
And
Figure BDA0001515476950000137
Figure BDA0001515476950000138
wherein,
Figure BDA0001515476950000139
is composed of
Figure BDA00015154769500001310
The input vector of (1);
also, calculate
Figure BDA00015154769500001311
and the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block therein; in
Figure BDA00015154769500001312
the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of the t-th sub-block are correspondingly recorded as
Figure BDA00015154769500001313
And
Figure BDA00015154769500001314
Figure BDA00015154769500001315
Figure BDA00015154769500001316
wherein,
Figure BDA00015154769500001317
is composed of
Figure BDA00015154769500001318
The input vector of (1);
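Step ② _6 fuses the three distortion-type results per sub-block using exp() of a 2-norm term scaled by the control parameter η. The exact expression is in the unrendered formulas; the sketch below assumes the natural reading: each distortion type is weighted by exp(−residual/η) of its reconstruction residual, normalized over the three types (`multi_distortion_fuse` and all shapes are illustrative, not the patent's exact definitions).

```python
import numpy as np

def multi_distortion_fuse(x, dicts, coeffs, qualities, eta=0.2):
    """Assumed multi-distortion fusion for one sub-block: distortion type s
    (JPEG, Gaussian blur, white noise) gets weight exp(-||x - D_s a_s||_2/eta);
    the fused quality vector is the weighted sum of the three estimates."""
    residuals = np.array([np.linalg.norm(x - D @ a)
                          for D, a in zip(dicts, coeffs)])
    w = np.exp(-residuals / eta)
    w /= w.sum()                          # normalize over the 3 distortion types
    q_fused = sum(wi * qi for wi, qi in zip(w, qualities))
    return q_fused, w

rng = np.random.default_rng(1)
dicts = [rng.standard_normal((64, 32)) for _ in range(3)]
coeffs = [rng.standard_normal(32) * 0.1 for _ in range(3)]
x = dicts[0] @ coeffs[0]                  # block perfectly explained by type 0
qualities = [np.full(6, 40.0), np.full(6, 20.0), np.full(6, 10.0)]
q, w = multi_distortion_fuse(x, dicts, coeffs, qualities)
```

The smaller a dictionary's reconstruction residual, the more its quality estimate dominates, which matches the role η plays as a softness control.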
② _7, calculation
Figure BDA00015154769500001319
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure BDA00015154769500001320
And QL,P
Figure BDA0001515476950000141
Also, calculate
Figure BDA0001515476950000142
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure BDA0001515476950000143
And QR,P
Figure BDA0001515476950000144
Also, calculate
Figure BDA0001515476950000145
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure BDA0001515476950000146
And QL,A
Figure BDA0001515476950000147
Also, calculate
Figure BDA0001515476950000148
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure BDA0001515476950000149
And QR,A
Figure BDA00015154769500001410
② _8, according to
Figure BDA00015154769500001411
And
Figure BDA00015154769500001412
and QL,P and QR,P, calculating the image quality objective evaluation prediction value of the local phase image of Stest, recorded as QP: QP = ωL,P × QL,P + ωR,P × QR,P; wherein ωL,P is the weight of QL,P,
Figure BDA00015154769500001413
ωR,P is the weight of QR,P,
Figure BDA00015154769500001414
the symbol "<,>" is the inner-product symbol, and C is a control parameter;
also according to
Figure BDA0001515476950000151
And
Figure BDA0001515476950000152
and QL,A and QR,A, calculating the image quality objective evaluation prediction value of the local amplitude image of Stest, recorded as QA: QA = ωL,A × QL,A + ωR,A × QR,A; wherein ωL,A is the weight of QL,A,
Figure BDA0001515476950000153
ωR,A is the weight of QR,A,
Figure BDA0001515476950000154
② _9, according to QP and QA, calculating the image quality objective evaluation prediction value of Stest, recorded as Q: Q = (ωP × (QP)^n + (1 − ωP) × (QA)^n)^(1/n); wherein ωP and n are weighting parameters.
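Steps ② _8 and ② _9 condense to a few lines once the weights are known. The left/right weights ωL,· and ωR,· come from inner products of the global sparse coefficient matrices plus the control parameter C (their exact expression is in the unrendered formulas), so the sketch below takes them as inputs; the final phase/amplitude fusion follows the stated power-mean form Q = (ωP × QP^n + (1 − ωP) × QA^n)^(1/n).

```python
def final_quality(QL_P, QR_P, QL_A, QR_A,
                  wL_P, wR_P, wL_A, wR_A,
                  omega_P=0.5, n=2.0):
    """Condensed sketch of steps 2_8 and 2_9. The viewpoint weights are
    assumed given; parameter values here are illustrative only."""
    QP = wL_P * QL_P + wR_P * QR_P         # left/right fusion, local phase
    QA = wL_A * QL_A + wR_A * QR_A         # left/right fusion, local amplitude
    # step 2_9: weighted power mean of the phase and amplitude qualities
    return (omega_P * QP ** n + (1.0 - omega_P) * QA ** n) ** (1.0 / n)

# toy inputs: QP = 0.70, QA = 0.78, equal phase/amplitude weight, n = 2
Q = final_quality(0.8, 0.6, 0.9, 0.5, 0.5, 0.5, 0.7, 0.3)
```

For n = 1 the power mean reduces to a plain weighted average; larger n pushes Q toward the larger of QP and QA.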
In step ① _4,
Figure BDA0001515476950000155
and
Figure BDA0001515476950000156
the acquisition process comprises the following steps:
① _4a, using a Log-Gabor filter,
Figure BDA0001515476950000157
Each pixel point in the image is filtered to obtain
Figure BDA0001515476950000158
the even-symmetric frequency response and odd-symmetric frequency response of each pixel point at different scales and directions; in
Figure BDA0001515476950000159
the even-symmetric frequency responses, at different scales and directions, of the pixel point whose coordinate position is (x, y) are recorded as eα,θ(x, y), and in
Figure BDA00015154769500001510
the odd-symmetric frequency responses, at different scales and directions, of the pixel point whose coordinate position is (x, y) are recorded as oα,θ(x, y); wherein 1 ≤ x ≤ W, 1 ≤ y ≤ H, and α represents the scale factor of the Log-Gabor filter,
Figure BDA00015154769500001511
Figure BDA00015154769500001512
θ denotes the direction factor of the Log-Gabor filter,
Figure BDA00015154769500001513
Figure BDA00015154769500001514
① _4b, calculation
Figure BDA00015154769500001515
The phase consistency characteristics of each pixel point in different directions are
Figure BDA00015154769500001516
the phase-consistency features, in different directions, of the pixel point whose coordinate position is (x, y) are recorded as PCθ(x, y),
Figure BDA00015154769500001517
wherein,
Figure BDA00015154769500001518
Figure BDA00015154769500001519
① _4c, according to
Figure BDA0001515476950000161
the direction corresponding to the maximum phase-consistency feature of each pixel point therein, calculating
Figure BDA0001515476950000162
The local phase characteristic and the local amplitude characteristic of each pixel point in the image; for the
Figure BDA0001515476950000163
the pixel point whose coordinate position is (x, y): among its phase-consistency features in different directions, find the maximum one and the direction corresponding to it, denoted as θm; then, according to θm, calculate the local phase feature and the local amplitude feature of the pixel point, correspondingly recorded as
Figure BDA0001515476950000164
And
Figure BDA0001515476950000165
Figure BDA0001515476950000166
wherein,
Figure BDA0001515476950000167
arctan() is the arctangent function,
Figure BDA0001515476950000168
Figure BDA0001515476950000169
to represent
Figure BDA00015154769500001610
the odd-symmetric frequency responses, at different scales, of the pixel point whose coordinate position is (x, y) in the direction θm corresponding to its maximum phase-consistency feature,
Figure BDA00015154769500001611
Figure BDA00015154769500001612
to represent
Figure BDA00015154769500001613
the even-symmetric frequency responses, at different scales, of the pixel point whose coordinate position is (x, y) in the direction θm corresponding to its maximum phase-consistency feature,
Figure BDA00015154769500001614
① _4d, according to
Figure BDA00015154769500001615
The local phase characteristics of all the pixel points in the image are obtained
Figure BDA00015154769500001616
Local phase image of
Figure BDA00015154769500001617
Also according to
Figure BDA00015154769500001618
Obtaining the local amplitude characteristics of all the pixel points in the image
Figure BDA00015154769500001619
Local amplitude image of
Figure BDA00015154769500001620
Obtaining according to steps ① _4a through ① _4d
Figure BDA00015154769500001621
And
Figure BDA00015154769500001622
in the same manner, obtaining
Figure BDA00015154769500001623
And
Figure BDA00015154769500001624
and
Figure BDA00015154769500001625
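Steps ① _4a through ① _4d follow the standard log-Gabor construction: even/odd symmetric responses eα,θ and oα,θ per scale and orientation, phase consistency to pick the dominant direction θm, then local phase and amplitude. The sketch below implements a single-scale, single-orientation log-Gabor in the frequency domain together with the commonly used forms LP = arctan(o/e) and LA = sqrt(e² + o²); all filter parameter values are assumptions, not the patent's.

```python
import numpy as np

def log_gabor_responses(img, wavelength=6.0, theta=0.0,
                        sigma_f=0.55, sigma_theta=0.4):
    """Single-scale, single-orientation log-Gabor filtering (cf. step 1_4a).

    The filter is built in the frequency domain; because the angular term
    keeps only one half-plane, the inverse FFT is complex: its real part is
    the even-symmetric response e, its imaginary part the odd-symmetric
    response o. Parameter values are illustrative assumptions."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                              # avoid log(0) at DC
    f0 = 1.0 / wavelength                           # center frequency
    radial = np.exp(-np.log(radius / f0) ** 2 / (2.0 * np.log(sigma_f) ** 2))
    radial[0, 0] = 0.0                              # zero DC response
    angle = np.arctan2(-fy, fx)
    d = np.arctan2(np.sin(angle - theta), np.cos(angle - theta))
    angular = np.exp(-d ** 2 / (2.0 * sigma_theta ** 2))
    resp = np.fft.ifft2(np.fft.fft2(img) * radial * angular)
    return resp.real, resp.imag                     # e_{a,theta}, o_{a,theta}

def local_phase_amplitude(e, o):
    """Assumed forms of step 1_4c: LP = arctan(o/e), LA = sqrt(e^2 + o^2)."""
    return np.arctan2(o, e), np.hypot(e, o)

img = np.zeros((32, 64))
img[:, 32:] = 1.0                                   # vertical step edge
e, o = log_gabor_responses(img)
LP, LA = local_phase_amplitude(e, o)                # amplitude peaks at the edge
```

The full method would repeat this over the α scales and θ directions, pick θm per pixel from the phase-consistency features, and assemble the local phase and amplitude images from the responses at θm.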
In step ① _7,
Figure BDA00015154769500001626
and
Figure BDA00015154769500001627
is solved by adopting a K-SVD method
Figure BDA00015154769500001628
obtained; wherein min() is the minimum-value function, the symbol "‖·‖F" is the Frobenius norm symbol of a matrix, the symbol "‖·‖1" is the 1-norm symbol of a matrix, and 1 ≤ s ≤ 3,
Figure BDA00015154769500001629
Figure BDA00015154769500001630
Figure BDA00015154769500001631
and
Figure BDA00015154769500001632
the dimensions of (a) are all 64 x M,
Figure BDA00015154769500001633
is composed of
Figure BDA00015154769500001634
The 1 st first image feature vector in (a),
Figure BDA00015154769500001635
is composed of
Figure BDA00015154769500001636
The kth first image feature vector in (1),
Figure BDA00015154769500001637
is composed of
Figure BDA00015154769500001638
The mth first image feature vector in (1),
Figure BDA00015154769500001639
is composed of
Figure BDA00015154769500001640
The 1 st first image feature vector in (a),
Figure BDA0001515476950000171
is composed of
Figure BDA0001515476950000172
The kth first image feature vector in (1),
Figure BDA0001515476950000173
is composed of
Figure BDA0001515476950000174
The mth first image feature vector in (1),
Figure BDA0001515476950000175
is composed of
Figure BDA0001515476950000176
The 1 st first image feature vector in (a),
Figure BDA0001515476950000177
is composed of
Figure BDA0001515476950000178
The kth first image feature vector in (1),
Figure BDA0001515476950000179
is composed of
Figure BDA00015154769500001710
the M-th first image feature vector; Y1 = [y1,1 … y1,k … y1,M], Y2 = [y2,1 … y2,k … y2,M], Y3 = [y3,1 … y3,k … y3,M], where the dimensions of Y1, Y2 and Y3 are all 6 × M; y1,1, y1,k and y1,M are the 1st, k-th and M-th image quality vectors in {y1,k | 1 ≤ k ≤ M}; y2,1, y2,k and y2,M are the 1st, k-th and M-th image quality vectors in {y2,k | 1 ≤ k ≤ M}; y3,1, y3,k and y3,M are the 1st, k-th and M-th image quality vectors in {y3,k | 1 ≤ k ≤ M},
Figure BDA00015154769500001711
and
Figure BDA00015154769500001712
each represents a sparse matrix,
Figure BDA00015154769500001713
Figure BDA00015154769500001714
Figure BDA00015154769500001715
and
Figure BDA00015154769500001716
the dimensions of (A) are all K multiplied by M,
Figure BDA00015154769500001717
is composed of
Figure BDA00015154769500001718
The 1 st column vector of (1),
Figure BDA00015154769500001719
is composed of
Figure BDA00015154769500001720
The k-th column vector of (a),
Figure BDA00015154769500001721
is composed of
Figure BDA00015154769500001722
The M-th column vector of (1),
Figure BDA00015154769500001723
is composed of
Figure BDA00015154769500001724
The 1 st column vector of (1),
Figure BDA00015154769500001725
is composed of
Figure BDA00015154769500001726
The k-th column vector of (a),
Figure BDA00015154769500001727
is composed of
Figure BDA00015154769500001728
The M-th column vector of (1),
Figure BDA00015154769500001729
is composed of
Figure BDA00015154769500001730
The 1 st column vector of (1),
Figure BDA00015154769500001731
is composed of
Figure BDA00015154769500001732
The k-th column vector of (a),
Figure BDA00015154769500001733
is composed of
Figure BDA00015154769500001734
The M-th column vector of (1),
Figure BDA00015154769500001735
the dimensions are all K × 1, the symbol "[ ]" is the vector representation symbol, γ is a weighting parameter, and λ is the Lagrange parameter;
In step ① _7,
Figure BDA00015154769500001736
and
Figure BDA00015154769500001737
is solved by adopting a K-SVD method
Figure BDA00015154769500001738
obtained during the solving process,
Figure BDA00015154769500001739
Figure BDA00015154769500001740
and
Figure BDA00015154769500001741
the dimensions of (a) are all 64 × M,
Figure BDA00015154769500001742
Is composed of
Figure BDA00015154769500001743
The 1 st second image feature vector in (b),
Figure BDA00015154769500001744
is composed of
Figure BDA0001515476950000181
The kth second image feature vector in (b),
Figure BDA0001515476950000182
is composed of
Figure BDA0001515476950000183
The mth second image feature vector in (1),
Figure BDA0001515476950000184
is composed of
Figure BDA0001515476950000185
The 1 st second image feature vector in (b),
Figure BDA0001515476950000186
is composed of
Figure BDA0001515476950000187
The kth second image feature vector in (b),
Figure BDA0001515476950000188
is composed of
Figure BDA0001515476950000189
The mth second image feature vector in (1),
Figure BDA00015154769500001810
is composed of
Figure BDA00015154769500001811
The 1 st second image feature vector in (b),
Figure BDA00015154769500001812
is composed of
Figure BDA00015154769500001813
The kth second image feature vector in (b),
Figure BDA00015154769500001814
is composed of
Figure BDA00015154769500001815
The mth second image feature vector in (1),
Figure BDA00015154769500001816
and
Figure BDA00015154769500001817
each represents a sparse matrix,
Figure BDA00015154769500001818
Figure BDA00015154769500001819
Figure BDA00015154769500001820
and
Figure BDA00015154769500001821
the dimensions of (A) are all K multiplied by M,
Figure BDA00015154769500001822
is composed of
Figure BDA00015154769500001823
The 1 st column vector of (1),
Figure BDA00015154769500001824
is composed of
Figure BDA00015154769500001825
The k-th column vector of (a),
Figure BDA00015154769500001826
is composed of
Figure BDA00015154769500001827
The M-th column vector of (1),
Figure BDA00015154769500001828
is composed of
Figure BDA00015154769500001829
The 1 st column vector of (1),
Figure BDA00015154769500001830
is composed of
Figure BDA00015154769500001831
The k-th column vector of (a),
Figure BDA00015154769500001832
is composed of
Figure BDA00015154769500001833
The M-th column vector of (1),
Figure BDA00015154769500001834
is composed of
Figure BDA00015154769500001835
The 1 st column vector of (1),
Figure BDA00015154769500001836
is composed of
Figure BDA00015154769500001837
The k-th column vector of (a),
Figure BDA00015154769500001838
is composed of
Figure BDA00015154769500001839
The M-th column vector of (1),
Figure BDA00015154769500001840
the dimensions of (A) are each K × 1.
Compared with the prior art, the invention has the advantages that:
1) in the training stage, acquiring JPEG (joint photographic experts group) distorted stereo images, Gaussian fuzzy distorted stereo images and Gaussian white noise distorted stereo images with different distortion intensities of undistorted stereo images, respectively constructing three training image sets, and respectively obtaining image feature dictionary tables and image quality dictionary tables of all local phase images and local amplitude images under different distortion types through joint dictionary training; in the testing stage, the image characteristic dictionary table and the image quality dictionary table do not need to be calculated, so that the complex machine learning training process is avoided, the subjective evaluation value of each tested stereo image does not need to be predicted, the calculation complexity is low, and the method is suitable for practical application occasions.
2) In the testing stage, according to the image characteristic dictionary table of the local phase image and the local amplitude image under different distortion types, which is obtained by construction in the training stage, the method obtains the sparse coefficient matrix of each sub-block in the local phase image and the local amplitude image of the tested stereo image through optimization, obtains the image quality vector of each sub-block in the local phase image and the local amplitude image under different distortion types, which are obtained by construction in the training stage, and finally obtains the image quality objective evaluation predicted value of the tested stereo image through multi-distortion fusion, local global fusion, left-right viewpoint fusion and phase amplitude fusion of the sparse coefficient matrix and the image quality vector, and keeps better consistency with the subjective evaluation value.
Drawings
Fig. 1 is a block diagram of the overall implementation of the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying examples.
The overall implementation block diagram of the objective evaluation method for the quality of the asymmetric multi-distortion stereo image provided by the invention is shown in fig. 1, and the method comprises a training stage and a testing stage. The specific steps of the training phase process are as follows:
① _1, selecting N original undistorted stereo images with width W and height H, then respectively performing JPEG distortion, Gaussian blur distortion and Gaussian white noise distortion of L different distortion intensities on each original undistorted stereo image to obtain JPEG distorted stereo images of L distortion intensities, Gaussian blur distorted stereo images of L distortion intensities and Gaussian white noise distorted stereo images of L distortion intensities corresponding to each original undistorted stereo image, and then forming a first training image set by all original undistorted stereo images and JPEG distorted stereo images of L distortion intensities corresponding to each original undistorted stereo image, and recording the first training image set as a first training image set
Figure BDA0001515476950000191
And all original undistorted stereo images and L corresponding Gaussian blur distorted stereo images with distortion intensity form a second training image set which is recorded as
Figure BDA0001515476950000192
All original undistorted stereo images and L distortion intensity Gaussian white noise distorted stereo images corresponding to the original undistorted stereo images form a third training image set, and the third training image set is recorded as
Figure BDA0001515476950000193
wherein N > 1, and in this embodiment N = 10; L > 1, and in this embodiment L = 3;
Figure BDA0001515476950000194
to represent
Figure BDA0001515476950000195
And
Figure BDA0001515476950000196
the u-th original undistorted stereo image in (a),
Figure BDA0001515476950000197
to represent
Figure BDA0001515476950000198
The distorted stereo image of the v-th distortion intensity corresponding to the original undistorted stereo image in (1),
Figure BDA0001515476950000199
to represent
Figure BDA00015154769500001910
The distorted stereo image of the v-th distortion intensity corresponding to the original undistorted stereo image in (1),
Figure BDA00015154769500001911
to represent
Figure BDA00015154769500001912
the distorted stereoscopic image of the v-th distortion intensity corresponding to the u-th original undistorted stereoscopic image therein; the symbol "{ }" is a set representation symbol.
In specific implementation, 10 original undistorted stereo images are taken, and each original undistorted stereo image is added with 3 distortion intensity JPEG distortions, 3 distortion intensity Gaussian blur distortions and 3 distortion intensity Gaussian white noise distortions, so that a first training image set consisting of 10 original undistorted stereo images and 30 JPEG distorted stereo images, a second training image set consisting of 10 original undistorted stereo images and 30 Gaussian blur distorted stereo images and a third training image set consisting of 10 original undistorted stereo images and 30 Gaussian white noise distorted stereo images are obtained.
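The three training sets of step ① _1 require distorting each reference view at L strengths. A minimal numpy-only sketch of two of the three distortion types follows (Gaussian blur and Gaussian white noise; JPEG compression needs an actual codec such as Pillow's, so it is omitted here). The strength values are illustrative, not the patent's.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_white_noise(img, sigma):
    """Additive white Gaussian noise at strength sigma, clipped to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def gaussian_blur(img, sigma):
    """Gaussian blur built from two separable 1-D convolutions."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

img = rng.random((16, 16))                        # stands in for one view
noisy = [gaussian_white_noise(img, s) for s in (0.02, 0.05, 0.10)]
blurred = [gaussian_blur(img, s) for s in (0.5, 1.0, 2.0)]
```

Applying each distortion list to both views of every reference pair (the right view possibly at a different strength than the left) yields the asymmetric multi-distortion training material the method assumes.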
① _2, respectively obtaining by 6 different full reference image quality evaluation methods
Figure BDA0001515476950000201
And
Figure BDA0001515476950000202
objective evaluation prediction value of each distorted stereo image; then will be
Figure BDA0001515476950000203
The 6 objective evaluation predicted values of each distorted three-dimensional image form the image quality vector of the distorted three-dimensional image in sequence, and the image quality vector of the distorted three-dimensional image is obtained
Figure BDA0001515476950000204
The 6 objective evaluation predicted values of each distorted three-dimensional image form the image quality vector of the distorted three-dimensional image in sequence, and the image quality vector of the distorted three-dimensional image is obtained
Figure BDA0001515476950000205
The 6 objective evaluation predicted values of each distorted stereo image form the image quality vector of the distorted stereo image in sequence.
In this embodiment, the 6 different full-reference image quality evaluation methods adopted are the known PSNR, MS-SSIM, FSIM, VIF, IW-SSIM, and UQI full-reference image quality evaluation methods, respectively.
① _3, will
Figure BDA0001515476950000206
The image quality vectors and the average subjective score difference values of all the distorted stereo images form a first training sample data set; then, support vector regression is adopted as the machine learning method to train all image quality vectors in the first training sample data set, so that the error between the regression function value obtained through training and the subjective quality value is minimized, and an optimal weight vector is obtained through fitting
Figure BDA0001515476950000207
And an optimal bias term
Figure BDA0001515476950000208
Then use
Figure BDA0001515476950000209
And
Figure BDA00015154769500002010
constructing a first quality prediction model, denoted as g1(y1),
Figure BDA00015154769500002011
wherein g1() is a functional representation, y1 represents an image quality vector and serves as the input vector of the first quality prediction model,
Figure BDA00015154769500002012
is composed of
Figure BDA00015154769500002013
the transpose thereof,
Figure BDA00015154769500002014
is y1Is a linear function of (a).
Also, will
Figure BDA00015154769500002015
The image quality vectors and the average subjective score difference values of all the distorted stereo images form a second training sample data set; then, a support vector regression is adopted as a machine learning method to train all image quality vectors in the second training sample data set, so that the error between a regression function value obtained through training and a subjective quality recommended value is minimum, and an optimal weight vector is obtained through fitting
Figure BDA0001515476950000211
And an optimal bias term
Figure BDA0001515476950000212
Then use
Figure BDA0001515476950000213
And
Figure BDA0001515476950000214
constructing a second quality prediction model, denoted as g2(y2),
Figure BDA0001515476950000215
wherein g2() is a functional representation, y2 represents an image quality vector and serves as the input vector of the second quality prediction model,
Figure BDA0001515476950000216
is composed of
Figure BDA0001515476950000217
the transpose thereof,
Figure BDA0001515476950000218
is y2Is a linear function of (a).
Also, will
Figure BDA0001515476950000219
The image quality vectors and the average subjective score difference values of all the distorted stereo images form a third training sample data set; then, a support vector regression is adopted as a machine learning method to train all image quality vectors in the third training sample data set, so that the error between a regression function value obtained through training and a subjective quality recommended value is minimum, and an optimal weight vector is obtained through fitting
Figure BDA00015154769500002110
And an optimal bias term
Figure BDA00015154769500002111
Then use
Figure BDA00015154769500002112
And
Figure BDA00015154769500002113
constructing a third quality prediction model, denoted as g3(y3),
Figure BDA00015154769500002114
wherein g3() is a functional representation, y3 represents an image quality vector and serves as the input vector of the third quality prediction model,
Figure BDA00015154769500002115
is composed of
Figure BDA00015154769500002116
the transpose thereof,
Figure BDA00015154769500002117
is y3Is a linear function of (a).
① _4. Compute the local phase feature and the local amplitude feature of each pixel in each distorted stereoscopic image of the first set, thereby obtaining the local phase image and the local amplitude image of each distorted stereoscopic image; then the local phase images of all the distorted stereoscopic images in the first set form its local-phase-image set, and the local amplitude images form its local-amplitude-image set.

Similarly, compute the local phase feature and the local amplitude feature of each pixel in each distorted stereoscopic image of the second set, obtain the local phase image and the local amplitude image of each distorted stereoscopic image, and form the corresponding local-phase-image set and local-amplitude-image set.

Likewise, compute the local phase feature and the local amplitude feature of each pixel in each distorted stereoscopic image of the third set, obtain the local phase image and the local amplitude image of each distorted stereoscopic image, and form the corresponding local-phase-image set and local-amplitude-image set.
In this embodiment, in step ① _4, the local phase image and the local amplitude image of a distorted image are acquired as follows:
①_4a. A Log-Gabor filter is used to filter each pixel in the image, obtaining the even-symmetric frequency response and the odd-symmetric frequency response of each pixel at different scales and in different orientations; the even-symmetric frequency response of the pixel at coordinate position (x, y) at scale α and orientation θ is recorded as eα,θ(x, y), and the odd-symmetric frequency response is recorded as oα,θ(x, y), wherein 1 ≤ x ≤ W, 1 ≤ y ≤ H, α represents the scale factor of the Log-Gabor filter, and θ represents the orientation factor of the Log-Gabor filter.
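A minimal frequency-domain Log-Gabor bank can be sketched as follows. The parameter values (base frequency, radial bandwidth, angular spread) are illustrative defaults, not values prescribed by the patent; the even and odd responses are the real and imaginary parts of the complex filter output:

```python
import numpy as np

def log_gabor_responses(img, scales=4, orients=4,
                        f0=0.1, sigma_f=0.55, sigma_theta=0.4):
    """Even/odd-symmetric responses e[a, t] and o[a, t] of a Log-Gabor bank
    applied to a greyscale image (all parameter values are illustrative)."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    radius = np.hypot(fx, fy); radius[0, 0] = 1.0   # avoid log(0) at DC
    angle = np.arctan2(fy, fx)
    F = np.fft.fft2(img)
    e = np.empty((scales, orients, H, W))
    o = np.empty((scales, orients, H, W))
    for a in range(scales):
        fc = f0 / (2.0 ** a)                         # coarser centre freq per scale
        radial = np.exp(-(np.log(radius / fc) ** 2)
                        / (2 * np.log(sigma_f) ** 2))
        radial[0, 0] = 0.0                           # zero DC component
        for t in range(orients):
            theta0 = t * np.pi / orients
            # wrapped angular distance to the orientation centre
            dtheta = np.arctan2(np.sin(angle - theta0), np.cos(angle - theta0))
            angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
            resp = np.fft.ifft2(F * radial * angular)
            e[a, t] = resp.real                      # even-symmetric response
            o[a, t] = resp.imag                      # odd-symmetric response
    return e, o
```

Frequency-domain construction avoids building spatial kernels per scale/orientation; a single FFT of the image is reused for all filters.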
① _4b. Compute the phase consistency feature of each pixel in the image in each orientation; the phase consistency feature of the pixel at coordinate position (x, y) in the orientation θ is recorded as PCθ(x, y), PCθ(x, y) = Eθ(x, y) / Σα Aα,θ(x, y), wherein Eθ(x, y) = √(Fθ(x, y)² + Hθ(x, y)²), Fθ(x, y) = Σα eα,θ(x, y), Hθ(x, y) = Σα oα,θ(x, y), and Aα,θ(x, y) = √(eα,θ(x, y)² + oα,θ(x, y)²).
① _4c. According to the orientation corresponding to the maximum phase consistency feature of each pixel in the image, compute the local phase feature and the local amplitude feature of each pixel. For the pixel at coordinate position (x, y), find the maximum among its phase consistency features in the different orientations, record the orientation corresponding to this maximum as θm, and then compute the local phase feature LP(x, y) and the local amplitude feature LA(x, y) of the pixel according to θm: LP(x, y) = arctan(Hθm(x, y)/Fθm(x, y)) and LA(x, y) = √(Fθm(x, y)² + Hθm(x, y)²), wherein arctan() is the arctangent function, Hθm(x, y) = Σα oα,θm(x, y) represents the sum, over the different scales, of the odd-symmetric frequency responses of the pixel at coordinate position (x, y) in the orientation θm corresponding to its maximum phase consistency feature, and Fθm(x, y) = Σα eα,θm(x, y) represents the corresponding sum of its even-symmetric frequency responses.
① _4d. The local phase features of all the pixels in the image form its local phase image, and the local amplitude features of all the pixels form its local amplitude image.

Following steps ① _4a to ① _4d in the same manner, the local phase images and the local amplitude images of the distorted stereoscopic images in the other sets are obtained.
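Steps ①_4b and ①_4c can be sketched together, assuming the standard phase-congruency construction from the even/odd responses (the small constant eps guarding the denominator is my addition for numerical safety):

```python
import numpy as np

def local_phase_amplitude(e, o, eps=1e-4):
    """Given even/odd responses e[a, t, y, x] and o[a, t, y, x]:
    PC_t = sqrt(F^2 + H^2) / (sum_a A + eps), with F = sum_a e, H = sum_a o,
    A = sqrt(e^2 + o^2); at the orientation t_m maximising PC per pixel,
    LP = arctan2(H, F) and LA = sqrt(F^2 + H^2)."""
    F = e.sum(axis=0)                        # orients x H x W
    Hsum = o.sum(axis=0)
    A = np.sqrt(e ** 2 + o ** 2).sum(axis=0)
    E = np.sqrt(F ** 2 + Hsum ** 2)
    PC = E / (A + eps)                       # phase consistency per orientation
    tm = PC.argmax(axis=0)                   # orientation index of max PC per pixel
    yy, xx = np.indices(tm.shape)
    Fm, Hm = F[tm, yy, xx], Hsum[tm, yy, xx]
    LP = np.arctan2(Hm, Fm)                  # local phase image
    LA = np.sqrt(Fm ** 2 + Hm ** 2)          # local amplitude image
    return LP, LA
```

Using arctan2 rather than arctan keeps the full phase range; the patent's scalar arctan form is recovered on the half-plane where Fθm > 0.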
① _5. Divide each local phase image and each local amplitude image of the first set into M non-overlapping sub-blocks of size 8×8; then, for each local phase image, the pixel values of all the pixels in each sub-block form, in order, the image feature vector of that sub-block, the image feature vector of the k-th sub-block being recorded accordingly, and likewise for each local amplitude image; the image feature vectors of the sub-blocks of all the local phase images form one set, and those of all the local amplitude images form another set; wherein the symbol ⌊ ⌋ is the round-down operation symbol, M = ⌊W/8⌋×⌊H/8⌋, 1 ≤ k ≤ M, and each image feature vector has dimension 64×1.

Similarly, divide each local phase image and each local amplitude image of the second set into M non-overlapping 8×8 sub-blocks, form the image feature vector of each sub-block in the same way, and form the corresponding sets of image feature vectors; each image feature vector has dimension 64×1.

Likewise, divide each local phase image and each local amplitude image of the third set into M non-overlapping 8×8 sub-blocks, form the image feature vector of each sub-block in the same way, and form the corresponding sets of image feature vectors; each image feature vector has dimension 64×1.
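The 8×8 sub-block division producing 64×1 feature vectors can be sketched as a single reshape (ragged borders are simply dropped, consistent with M = ⌊W/8⌋×⌊H/8⌋):

```python
import numpy as np

def block_feature_vectors(img, b=8):
    """Split an H x W image into M = floor(H/b)*floor(W/b) non-overlapping
    b x b sub-blocks; stack each block's pixels, in order, into a (b*b) x 1
    column of the returned (b*b) x M feature matrix."""
    H, W = img.shape
    nh, nw = H // b, W // b
    img = img[:nh * b, :nw * b]                        # drop ragged border
    blocks = img.reshape(nh, b, nw, b).swapaxes(1, 2)  # nh x nw x b x b
    return blocks.reshape(nh * nw, b * b).T            # (b*b) x M
```

The same routine serves the training phase (M from W, H) and the test phase (M' from W', H').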
① _6. Divide each distorted stereoscopic image of the first set into M non-overlapping sub-blocks of size 8×8; then adopt 6 different full-reference image quality evaluation methods to respectively obtain the objective evaluation prediction values of each sub-block of each distorted stereoscopic image; the 6 objective evaluation prediction values of each sub-block then form, in order, the image quality vector of that sub-block, the image quality vector of the k-th sub-block being recorded as y1,k; the set formed by the image quality vectors of all the sub-blocks of all the distorted stereoscopic images of the first set is recorded as {y1,k | 1 ≤ k ≤ M}; wherein y1,k has dimension 6×1.

Similarly, divide each distorted stereoscopic image of the second set into M non-overlapping 8×8 sub-blocks, obtain the 6 objective evaluation prediction values of each sub-block with the same 6 full-reference image quality evaluation methods, record the image quality vector of the k-th sub-block as y2,k, and record the set of image quality vectors as {y2,k | 1 ≤ k ≤ M}; wherein y2,k has dimension 6×1.

Likewise, for each distorted stereoscopic image of the third set, record the image quality vector of the k-th sub-block as y3,k and the set of image quality vectors as {y3,k | 1 ≤ k ≤ M}; wherein y3,k has dimension 6×1.
In this embodiment, the 6 different full-reference image quality evaluation methods adopted are the known PSNR, MS-SSIM, FSIM, VIF, IW-SSIM, and UQI full-reference image quality evaluation methods, respectively.
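Of the six metrics, PSNR and UQI are simple enough to sketch per sub-block; MS-SSIM, FSIM, VIF and IW-SSIM would come from external implementations and are omitted here, so this hedged sketch fills only 2 of the 6 entries of a quality vector:

```python
import numpy as np

def block_psnr(ref, dis, peak=255.0):
    """Peak signal-to-noise ratio of a distorted block against its reference."""
    mse = np.mean((ref - dis) ** 2)
    return 10 * np.log10(peak ** 2 / mse) if mse > 0 else float('inf')

def block_uqi(ref, dis, eps=1e-12):
    """Universal Quality Index: Q = 4*cov*mx*my / ((vx+vy)*(mx^2+my^2))."""
    mx, my = ref.mean(), dis.mean()
    vx, vy = ref.var(), dis.var()
    cov = ((ref - mx) * (dis - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

def partial_quality_vector(ref_block, dis_block):
    """Two of the six entries of a sub-block's image quality vector."""
    return np.array([block_psnr(ref_block, dis_block),
                     block_uqi(ref_block, dis_block)])
```

Stacking the six per-block scores in a fixed order yields the 6×1 vectors y1,k, y2,k, y3,k used in the dictionary training.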
① _7. Adopt the K-SVD method to perform a joint dictionary training operation on the set formed by the first image feature vectors and {y1,k | 1 ≤ k ≤ M}, {y2,k | 1 ≤ k ≤ M} and {y3,k | 1 ≤ k ≤ M}, constructing the corresponding image feature dictionary table and image quality dictionary table; wherein each image feature dictionary table has dimension 64×K, each image quality dictionary table has dimension 6×K, K represents the set number of dictionary atoms, K ≥ 1, and in this embodiment K = 256.
Similarly, adopt the K-SVD method to perform the joint dictionary training operation on the set formed by the second image feature vectors and {y1,k | 1 ≤ k ≤ M}, {y2,k | 1 ≤ k ≤ M} and {y3,k | 1 ≤ k ≤ M}, constructing the corresponding image feature dictionary tables and image quality dictionary tables; wherein the image feature dictionary tables each have dimension 64×K and the image quality dictionary tables each have dimension 6×K.
In this embodiment, in step ① _7, the image feature dictionary table Dsf and the image quality dictionary table Dsq are obtained by solving, with the existing K-SVD method, the joint optimization

(Dsf, Dsq) = argmin { ‖Xs − Dsf·As‖F² + γ·‖Ys − Dsq·As‖F² + λ·‖As‖1 }, 1 ≤ s ≤ 3,

wherein min() is the minimum-value function, the symbol "‖ ‖F" is the Frobenius-norm symbol of a matrix, the symbol "‖ ‖1" is the 1-norm symbol of a matrix, Xs = [xs,1 … xs,k … xs,M] is the matrix of dimension 64×M whose columns xs,1, xs,k and xs,M are the 1st, k-th and M-th first image feature vectors of the corresponding set, Y1 = [y1,1 … y1,k … y1,M], Y2 = [y2,1 … y2,k … y2,M] and Y3 = [y3,1 … y3,k … y3,M] all have dimension 6×M, y1,1, y1,k and y1,M are the 1st, k-th and M-th image quality vectors in {y1,k | 1 ≤ k ≤ M}, and likewise for Y2 with {y2,k | 1 ≤ k ≤ M} and Y3 with {y3,k | 1 ≤ k ≤ M}, As represents a sparse matrix of dimension K×M whose 1st, k-th and M-th column vectors each have dimension K×1, the symbol "[ ]" is a vector representation symbol, γ is a weighting parameter, with γ = 0.5 in this embodiment, and λ is a Lagrangian parameter, with λ = 0.15 in this embodiment.
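The joint training objective above (feature-reconstruction term, γ-weighted quality term, λ-weighted sparsity) can be evaluated as follows; the actual K-SVD solver alternates sparse coding with atom-wise SVD dictionary updates, which this sketch deliberately does not perform:

```python
import numpy as np

def joint_dict_objective(X, Y, Df, Dq, A, gamma=0.5, lam=0.15):
    """J = ||X - Df A||_F^2 + gamma*||Y - Dq A||_F^2 + lam*||A||_1
    X: 64 x M feature matrix, Y: 6 x M quality matrix,
    Df: 64 x K feature dictionary, Dq: 6 x K quality dictionary,
    A: K x M sparse coefficient matrix."""
    feat_err = np.linalg.norm(X - Df @ A, 'fro') ** 2
    qual_err = np.linalg.norm(Y - Dq @ A, 'fro') ** 2
    sparsity = np.abs(A).sum()
    return feat_err + gamma * qual_err + lam * sparsity
```

A practical trick for the joint problem is to stack X and √γ·Y into one (64+6)×M matrix and Df, √γ·Dq into one dictionary, reducing it to ordinary K-SVD.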
Likewise, in step ① _7, the image feature dictionary tables and image quality dictionary tables of the second group are obtained by solving the same joint optimization with the existing K-SVD method, wherein the matrices formed by the second image feature vectors each have dimension 64×M, their 1st, k-th and M-th columns being the 1st, k-th and M-th second image feature vectors of the corresponding sets, and the sparse matrices each have dimension K×M, their 1st, k-th and M-th column vectors each having dimension K×1.
The specific steps of the test phase process are as follows:
② _1. For any test stereoscopic image Stest of width W' and height H', record the left view of Stest as Ltest and the right view of Stest as Rtest; wherein W' is the same as or different from W, and H' is the same as or different from H.
② _2. In the same operation as step ① _4, obtain the respective local phase image and local amplitude image of each of Stest, Ltest and Rtest, recording the local phase image and the local amplitude image of Ltest, and the local phase image and the local amplitude image of Rtest, accordingly.
② _3. Divide each of the local phase images and local amplitude images obtained in step ② _2 into M' non-overlapping sub-blocks of size 8×8; then, for each image, the pixel values of all the pixels in each sub-block form, in order, the image feature vector of that sub-block, the image feature vector of the t-th sub-block of each image being recorded accordingly; the image feature vectors of all the sub-blocks of each image then form one set per image; wherein M' = ⌊W'/8⌋×⌊H'/8⌋, 1 ≤ t ≤ M', and each image feature vector has dimension 64×1.
② _4. According to the image feature dictionary tables constructed in the training phase, respectively optimally reconstruct the image feature vector of each sub-block of each of the local phase images and local amplitude images obtained in step ② _3, obtaining the first, second and third sparse coefficient matrices of each image feature vector; each sparse coefficient matrix is obtained by solving, with the K-SVD method, the sparse representation of the image feature vector over the corresponding image feature dictionary table, i.e. minimizing the Frobenius-norm reconstruction error plus λ times the 1-norm of the sparse coefficient matrix; wherein each sparse coefficient matrix has dimension K×1, min() is the minimum-value function, the symbol "‖ ‖F" is the Frobenius-norm symbol of a matrix, the symbol "‖ ‖1" is the 1-norm symbol of a matrix, and λ is the Lagrangian parameter.
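The per-vector sparse coding problem min_a ||x − Df·a||² + λ||a||1 can be approximated with ISTA as a stand-in for the K-SVD-based solver named in the text; the helper estimate_quality_vector (a hypothetical name) then illustrates the presumed use of the coefficients with the quality dictionary, as in step ② _5:

```python
import numpy as np

def sparse_code_ista(x, D, lam=0.15, iters=200):
    """Approximate min_a ||x - D a||_2^2 + lam*||a||_1 by iterative
    shrinkage-thresholding (ISTA); a stand-in for the K-SVD-based solver."""
    L = 2 * np.linalg.norm(D, 2) ** 2          # Lipschitz constant of gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = 2 * D.T @ (D @ a - x)              # gradient of the data term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def estimate_quality_vector(x, Df, Dq, lam=0.15):
    """Code x against the feature dictionary, then read the quality vector
    off the quality dictionary: y_hat = Dq a (assumed estimation rule)."""
    a = sparse_code_ista(x, Df, lam)
    return Dq @ a
```

Because the same coefficients couple the two dictionaries during training, the quality vector of an unseen block is recovered from its feature-domain code alone.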
② _5 constructed according to the procedure in the training phase
Figure BDA0001515476950000321
Estimate separately
Figure BDA0001515476950000322
And
Figure BDA0001515476950000323
the first image quality vector of each sub-block in each will
Figure BDA0001515476950000324
The first image quality vector of the t sub-block of (1) is noted
Figure BDA0001515476950000325
Figure BDA0001515476950000326
Will be provided with
Figure BDA0001515476950000327
The first image quality vector of the t sub-block of (1) is noted
Figure BDA0001515476950000328
Figure BDA0001515476950000329
Also constructed from the process during the training phase
Figure BDA00015154769500003210
Estimate separately
Figure BDA00015154769500003211
And
Figure BDA00015154769500003212
the second image quality vector of each sub-block in each will
Figure BDA00015154769500003213
The second image quality vector of the t sub-block of (1)
Figure BDA00015154769500003214
Figure BDA00015154769500003215
Will be provided with
Figure BDA00015154769500003216
The second image quality vector of the t sub-block of (1) is noted as
Figure BDA00015154769500003217
Figure BDA00015154769500003218
Also constructed from the process during the training phase
Figure BDA00015154769500003219
Estimate separately
Figure BDA00015154769500003220
And
Figure BDA00015154769500003221
the third image quality vector of each sub-block in each
Figure BDA00015154769500003222
The third image quality vector of the t sub-block of (1)
Figure BDA00015154769500003223
Figure BDA00015154769500003224
Will be provided with
Figure BDA00015154769500003225
The third image quality vector of the t sub-block of (1)
Figure BDA00015154769500003226
Figure BDA00015154769500003227
Also constructed from the process during the training phase
Figure BDA00015154769500003228
Estimate separately
Figure BDA00015154769500003229
And
Figure BDA00015154769500003230
the first image quality vector of each sub-block in each will
Figure BDA00015154769500003231
The first image quality vector of the t sub-block of (1) is noted
Figure BDA00015154769500003232
Figure BDA00015154769500003233
Will be provided with
Figure BDA00015154769500003234
The first image quality vector of the t sub-block of (1) is noted
Figure BDA00015154769500003235
Figure BDA00015154769500003236
Also constructed from the process during the training phase
Figure BDA00015154769500003237
Estimate separately
Figure BDA00015154769500003238
And
Figure BDA00015154769500003239
the second image quality vector of each sub-block in each will
Figure BDA00015154769500003240
The second image quality vector of the t sub-block of (1)
Figure BDA00015154769500003241
Figure BDA00015154769500003242
Will be provided with
Figure BDA00015154769500003243
The second image quality vector of the t sub-block of (1)
Figure BDA00015154769500003244
Figure BDA00015154769500003245
Also constructed from the process during the training phase
Figure BDA00015154769500003246
Estimate separately
Figure BDA00015154769500003247
And
Figure BDA00015154769500003248
the third image quality vector of each sub-block in each
Figure BDA00015154769500003249
The third image quality vector of the t sub-block of (1)
Figure BDA00015154769500003250
Figure BDA00015154769500003251
Will be provided with
Figure BDA00015154769500003252
The third image quality vector of the t sub-block of (1)
Figure BDA00015154769500003253
Figure BDA00015154769500003254
wherein,
Figure BDA00015154769500003255
and
Figure BDA00015154769500003256
the dimensions are all 6 × 1.
② _6, calculation
Figure BDA00015154769500003257
And the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block in the image processing system
Figure BDA0001515476950000331
The multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality correspondence of the t sub-block in the sequence are recorded as
Figure BDA0001515476950000332
And
Figure BDA0001515476950000333
Figure BDA0001515476950000334
wherein exp () represents an exponential function with a natural base e as a base, and the symbol "| | | | purple2"2-norm sign of matrix is obtained, η is control parameter, η is 1000 in this embodiment,
Figure BDA0001515476950000335
is composed of
Figure BDA0001515476950000336
The input vector of (1).
Also, calculate
Figure BDA0001515476950000337
And the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block in the image processing system
Figure BDA0001515476950000338
The multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality correspondence of the t sub-block in the sequence are recorded as
Figure BDA0001515476950000339
And
Figure BDA00015154769500003310
Figure BDA00015154769500003311
Figure BDA00015154769500003312
wherein,
Figure BDA00015154769500003313
is composed of
Figure BDA00015154769500003314
The input vector of (1).
Also, calculate
Figure BDA00015154769500003315
And the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block in the image processing system
Figure BDA00015154769500003316
The multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality correspondence of the t sub-block in the sequence are recorded as
Figure BDA00015154769500003317
And
Figure BDA00015154769500003318
Figure BDA00015154769500003319
wherein,
Figure BDA0001515476950000341
is composed of
Figure BDA0001515476950000342
The input vector of (1).
Also, calculate
Figure BDA0001515476950000343
And the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block in the image processing system
Figure BDA0001515476950000344
The multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality correspondence of the t sub-block in the sequence are recorded as
Figure BDA0001515476950000345
And
Figure BDA0001515476950000346
Figure BDA0001515476950000347
Figure BDA0001515476950000348
wherein,
Figure BDA0001515476950000349
is composed of
Figure BDA00015154769500003410
The input vector of (1).
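Step ②_6 fuses the distortion-specific estimates of each sub-block into one multi-distortion fusion sparse coefficient matrix and image quality, with exp()-based weights controlled by η (= 1000 above). The exact weighting formula is in the figure images and not recoverable here, so the sketch below is an assumption: each candidate is weighted by exp(−η·r) of its 2-norm reconstruction residual r and the weights are normalized; `fuse_multi_distortion` and its arguments are hypothetical names:

```python
import numpy as np

def fuse_multi_distortion(alphas, qualities, x, dicts, eta=1000.0):
    """Assumed sketch of the fusion step: weight each distortion-specific
    estimate (sparse coefficients alpha_i, quality vector q_i) by
    exp(-eta * r_i), where r_i = ||x - D_i @ alpha_i||_2 is its
    reconstruction residual, then normalize the weights."""
    r = np.array([np.linalg.norm(x - D @ a) for D, a in zip(dicts, alphas)])
    w = np.exp(-eta * r)
    w = w / w.sum()                              # weights sum to 1
    alpha_f = sum(wi * ai for wi, ai in zip(w, alphas))
    q_f = sum(wi * qi for wi, qi in zip(w, qualities))
    return alpha_f, q_f
```

A large η makes the fusion nearly a hard selection of the best-reconstructing distortion type; a small η averages the three candidates.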
② _7, calculation
Figure BDA00015154769500003411
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure BDA00015154769500003412
And QL,P
Figure BDA00015154769500003413
Also, calculate
Figure BDA00015154769500003414
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure BDA00015154769500003415
And QR,P
Figure BDA00015154769500003416
Also, calculate
Figure BDA00015154769500003417
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure BDA00015154769500003418
And QL,A
Figure BDA0001515476950000351
Also, calculate
Figure BDA0001515476950000352
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure BDA0001515476950000353
And QR,A
Figure BDA0001515476950000354
② _8, according to
Figure BDA0001515476950000355
And
Figure BDA0001515476950000356
and QL,P and QR,P, calculate Stest's objective quality evaluation prediction value of the local phase image, recorded as QP: QP = ωL,P × QL,P + ωR,P × QR,P; wherein ωL,P is the weight of QL,P,
Figure BDA0001515476950000357
ωR,P is the weight of QR,P,
Figure BDA0001515476950000358
the symbol "<,>" denotes the inner product, C is a control parameter, and in this example C = 0.02.
Also according to
Figure BDA0001515476950000359
And
Figure BDA00015154769500003510
and QL,A and QR,A, calculate Stest's objective quality evaluation prediction value of the local amplitude image, recorded as QA: QA = ωL,A × QL,A + ωR,A × QR,A; wherein ωL,A is the weight of QL,A,
Figure BDA00015154769500003511
ωR,A is the weight of QR,A,
Figure BDA00015154769500003512
②_9. According to QP and QA, calculate Stest's image quality objective evaluation prediction value, recorded as Q: Q = (ωP × (QP)^n + (1 − ωP) × (QA)^n)^(1/n); wherein ωP and n are weighting parameters, and in this example ωP = 0.35 and n = 1.
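Steps ②_8 and ②_9 can be written out directly: each of QP and QA is a weighted sum of the left- and right-view predictions, and the final prediction Q pools them with exponent n. A minimal sketch (function names are ours; with the embodiment's n = 1 the pooling reduces to a weighted mean):

```python
def combine_views(q_left, q_right, w_left, w_right):
    """Binocular weighted combination used for both Q_P and Q_A."""
    return w_left * q_left + w_right * q_right

def final_quality(q_P, q_A, omega_P=0.35, n=1):
    """Generalized-mean pooling of the local-phase and local-amplitude
    predictions: Q = (omega_P * Q_P**n + (1 - omega_P) * Q_A**n)**(1/n)."""
    return (omega_P * q_P ** n + (1.0 - omega_P) * q_A ** n) ** (1.0 / n)
```

For n = 1 this is simply Q = 0.35·QP + 0.65·QA with the embodiment's parameters.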
In this embodiment, the asymmetric multi-distortion stereo image database established by Ningbo University is used to analyze the correlation between the image quality objective evaluation prediction values of the distorted stereo images obtained by this embodiment and their mean subjective score differences. This database contains 3000 asymmetric multi-distortion stereo images, and the mean subjective score difference of each distorted stereo image in it was obtained by a subjective quality evaluation method.
In this embodiment, 3 objective parameters commonly used for assessing image quality evaluation methods are adopted as evaluation indices: the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC) and the root mean squared error (RMSE). Under the nonlinear regression condition, PLCC and RMSE reflect the accuracy of the objective evaluation model for distorted stereo images, and SROCC reflects its monotonicity. The PLCC, SROCC and RMSE between the image quality objective evaluation prediction values of the distorted stereo images and the mean subjective score differences were compared for the method of the invention and the known PSNR and SSIM full-reference quality evaluation methods; the comparison results are shown in Table 1. Table 1 shows that the correlation between the prediction values obtained by the method of the invention and the mean subjective score differences is high, so that the objective evaluation results agree well with human subjective perception, which demonstrates the feasibility and effectiveness of the method of the invention.
TABLE 1. PLCC, SROCC and RMSE between the image quality objective evaluation prediction values and the mean subjective score differences of distorted stereo images, for the method of the present invention and the known full-reference quality evaluation methods

Method                   PLCC     SROCC    RMSE
PSNR method              0.7003   0.7139   8.4165
SSIM method              0.7144   0.7339   8.1642
Method of the invention  0.7853   0.7652   7.4416
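The three evaluation indices in Table 1 can be reproduced with a few lines of NumPy; the `srocc` helper below ranks without tie handling, which suffices for continuous prediction scores (all names are illustrative):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def srocc(x, y):
    """Spearman rank-order correlation: Pearson on the ranks
    (no tie correction, fine for continuous scores)."""
    rank = lambda v: np.argsort(np.argsort(np.asarray(v))).astype(float)
    return plcc(rank(x), rank(y))

def rmse(x, y):
    """Root mean squared error between predictions and subjective scores."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(np.mean(d * d)))
```

In practice PLCC and RMSE are computed after the customary nonlinear (e.g. logistic) regression of predictions onto subjective scores; SROCC is rank-based and unaffected by that monotonic mapping.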

Claims (3)

1. An objective evaluation method for quality of asymmetric multi-distortion stereo images is characterized by comprising a training stage and a testing stage;
the specific steps of the training phase process are as follows:
①_1. Select N original undistorted stereo images with width W and height H; then apply to each original undistorted stereo image JPEG distortion, Gaussian blur distortion and Gaussian white noise distortion at L different distortion intensities each, obtaining for each original undistorted stereo image JPEG distorted stereo images of L distortion intensities, Gaussian blur distorted stereo images of L distortion intensities and Gaussian white noise distorted stereo images of L distortion intensities; then let all the original undistorted stereo images and the JPEG distorted stereo images of L distortion intensities corresponding to each of them form a first training image set, denoted as
Figure FDA0001515476940000011
And all original undistorted stereo images and L corresponding Gaussian blur distorted stereo images with distortion intensity form a second training image set which is recorded as
Figure FDA0001515476940000012
All original undistorted stereo images and L distortion intensity Gaussian white noise distorted stereo images corresponding to the original undistorted stereo images form a third training image set, and the third training image set is recorded as
Figure FDA0001515476940000013
wherein N > 1, L > 1,
Figure FDA0001515476940000014
To represent
Figure FDA0001515476940000015
Figure FDA0001515476940000016
And
Figure FDA0001515476940000017
the u-th original undistorted stereo image in (a),
Figure FDA0001515476940000018
to represent
Figure FDA0001515476940000019
The distorted stereo image of the v-th distortion intensity corresponding to the original undistorted stereo image in (1),
Figure FDA00015154769400000110
to represent
Figure FDA00015154769400000111
The distorted stereo image of the v-th distortion intensity corresponding to the original undistorted stereo image in (1),
Figure FDA00015154769400000112
to represent
Figure FDA00015154769400000113
the distorted stereo image of the v-th distortion intensity corresponding to the u-th original undistorted stereo image;
① _2, respectively obtaining by 6 different full reference image quality evaluation methods
Figure DA00015154769436885
Figure FDA00015154769400000114
And
Figure FDA00015154769400000115
objective evaluation prediction value of each distorted stereo image; then will be
Figure FDA00015154769400000116
The 6 objective evaluation predicted values of each distorted three-dimensional image form the image quality vector of the distorted three-dimensional image in sequence, and the image quality vector of the distorted three-dimensional image is obtained
Figure FDA0001515476940000021
The 6 objective evaluation predicted values of each distorted three-dimensional image form the image quality vector of the distorted three-dimensional image in sequence, and the image quality vector of the distorted three-dimensional image is obtained
Figure FDA0001515476940000022
The 6 objective evaluation predicted values of each distorted three-dimensional image form an image quality vector of the distorted three-dimensional image in sequence;
① _3, will
Figure FDA0001515476940000023
The image quality vectors and the mean subjective score differences of all the distorted stereo images form a first training sample data set; then support vector regression is adopted as the machine-learning method to train all the image quality vectors in the first training sample data set, so that the error between the regression function value obtained by training and the subjective quality recommended value is minimized, and fitting yields an optimal weight vector
Figure FDA0001515476940000024
And an optimal bias term
Figure FDA0001515476940000025
Then use
Figure FDA0001515476940000026
And
Figure FDA0001515476940000027
constructing a first quality prediction model, denoted as g1(y1),
Figure FDA0001515476940000028
wherein g1() is a functional representation, y1 denotes an image quality vector and serves as the input vector of the first quality prediction model,
Figure FDA0001515476940000029
is composed of
Figure FDA00015154769400000210
the transpose of,
Figure FDA00015154769400000211
is a linear function of y1;
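The quality prediction model g1(y1) is fitted by support vector regression in the patent so that g1(y1) approximates the mean subjective score difference. As an illustrative stand-in only, the sketch below fits the same linear form g(y) = wᵀy + b by ridge-regularized least squares; `fit_quality_model` and `predict_quality` are hypothetical names, not the patent's procedure:

```python
import numpy as np

def fit_quality_model(Y, dmos, ridge=1e-3):
    """Stand-in for the epsilon-SVR fit: learn weight vector w and bias b
    so g(y) = w @ y + b approximates the subjective scores.
    Y: M x 6 matrix of image quality vectors, dmos: length-M targets."""
    M = Y.shape[0]
    A = np.hstack([Y, np.ones((M, 1))])            # append bias column
    reg = ridge * np.eye(A.shape[1])               # ridge regularization
    wb = np.linalg.solve(A.T @ A + reg, A.T @ np.asarray(dmos, float))
    return wb[:-1], wb[-1]                         # (w, b)

def predict_quality(y, w, b):
    """Evaluate the learned linear prediction model g(y) = w @ y + b."""
    return float(w @ y + b)
```

An actual SVR adds the epsilon-insensitive loss and (optionally) a kernel; the linear least-squares fit above only illustrates the "weight vector plus bias term" structure of the trained model.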
also, will
Figure FDA00015154769400000212
The image quality vectors and the average subjective score difference values of all the distorted stereo images form a second training sample data set; then, a support vector regression is adopted as a machine learning method to train all image quality vectors in the second training sample data set, so that the error between a regression function value obtained through training and a subjective quality recommended value is minimum, and an optimal weight vector is obtained through fitting
Figure FDA00015154769400000213
And an optimal bias term
Figure FDA00015154769400000214
Then use
Figure FDA00015154769400000215
And
Figure FDA00015154769400000216
constructing a second quality prediction model, denoted as g2(y2),
Figure FDA00015154769400000217
wherein g2() is a functional representation, y2 denotes an image quality vector and serves as the input vector of the second quality prediction model,
Figure FDA00015154769400000218
is composed of
Figure FDA00015154769400000219
the transpose of,
Figure FDA00015154769400000220
is a linear function of y2;
also, will
Figure FDA00015154769400000221
The image quality vectors and the average subjective score difference values of all the distorted stereo images form a third training sample data set; then, a support vector regression is adopted as a machine learning method to train all image quality vectors in the third training sample data set, so that the error between a regression function value obtained through training and a subjective quality recommended value is minimum, and an optimal weight vector is obtained through fitting
Figure FDA00015154769400000222
And an optimal bias term
Figure FDA00015154769400000223
Then use
Figure FDA00015154769400000224
And
Figure FDA00015154769400000225
constructing a third quality prediction model, denoted as g3(y3),
Figure FDA0001515476940000031
wherein g3() is a functional representation, y3 denotes an image quality vector and serves as the input vector of the third quality prediction model,
Figure FDA0001515476940000032
is composed of
Figure FDA0001515476940000033
the transpose of,
Figure FDA0001515476940000034
is a linear function of y3;
① _4, calculation
Figure FDA0001515476940000035
The local phase characteristic and the local amplitude characteristic of each pixel point in each distorted stereo image are obtained
Figure FDA0001515476940000036
Will be the local phase image and the local amplitude image of each distorted stereo image
Figure FDA0001515476940000037
The local phase image and the local amplitude image are associated as
Figure FDA0001515476940000038
And
Figure FDA0001515476940000039
then will be
Figure FDA00015154769400000310
The set of local phase images of all distorted stereo images in (1) is denoted as
Figure FDA00015154769400000311
And will be
Figure FDA00015154769400000312
The set of local amplitude image components of all distorted stereo images in (1) is expressed as
Figure FDA00015154769400000313
Also, calculate
Figure FDA00015154769400000314
The local phase characteristic and the local amplitude characteristic of each pixel point in each distorted stereo image are obtained
Figure FDA00015154769400000315
Will be the local phase image and the local amplitude image of each distorted stereo image
Figure FDA00015154769400000316
The local phase image and the local amplitude image are associated as
Figure FDA00015154769400000317
And
Figure FDA00015154769400000318
then will be
Figure FDA00015154769400000319
The set of local phase images of all distorted stereo images in (1) is denoted as
Figure FDA00015154769400000320
And will be
Figure FDA00015154769400000321
The set of local amplitude image components of all distorted stereo images in (1) is expressed as
Figure FDA00015154769400000322
Also, calculate
Figure FDA00015154769400000323
The local phase characteristic and the local amplitude characteristic of each pixel point in each distorted stereo image are obtained
Figure FDA00015154769400000324
Will be the local phase image and the local amplitude image of each distorted stereo image
Figure FDA00015154769400000325
The local phase image and the local amplitude image are associated as
Figure FDA00015154769400000326
And
Figure FDA00015154769400000327
then will be
Figure FDA00015154769400000328
The set of local phase images of all distorted stereo images in (1) is denoted as
Figure FDA00015154769400000329
And will be
Figure FDA00015154769400000330
The set of local amplitude image components of all distorted stereo images in (1) is expressed as
Figure FDA00015154769400000331
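Step ①_4's per-pixel local phase and local amplitude are defined (in the figure formulas) from quadrature filter responses, typically the even- and odd-symmetric outputs of a log-Gabor filter. Assuming such a quadrature pair is available, the final per-pixel computation can be sketched as:

```python
import numpy as np

def local_phase_amplitude(even, odd):
    """Given quadrature-pair filter responses per pixel (e.g. even/odd
    log-Gabor outputs, whose construction the patent leaves to its
    figures), the local amplitude is the response magnitude and the
    local phase is the response angle."""
    even, odd = np.asarray(even, float), np.asarray(odd, float)
    amplitude = np.hypot(even, odd)       # sqrt(even^2 + odd^2)
    phase = np.arctan2(odd, even)         # angle in (-pi, pi]
    return phase, amplitude
```

Applied to every pixel of a view, the two returned arrays form the local phase image and the local amplitude image used in the subsequent block-wise processing.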
① _5, will
Figure FDA00015154769400000332
Each local phase image of (1) and
Figure FDA0001515476940000041
each local amplitude image in (1) is divided into
Figure FDA0001515476940000042
Sub-blocks with size of 8 × 8 and not overlapped with each other; then will be
Figure FDA0001515476940000043
The pixel values of all pixel points in each sub-block in each local phase image form the image characteristic vector of the sub-block in sequence, and the image characteristic vector is obtained
Figure FDA0001515476940000044
The image feature vector formed by the pixel values of all the pixel points in the kth sub-block in all the local phase images in sequence is recorded as
Figure FDA0001515476940000045
And will be
Figure FDA0001515476940000046
The pixel values of all pixel points in each sub-block in each local amplitude image form the image characteristic vector of the sub-block in sequence, and the image characteristic vector is obtained
Figure FDA0001515476940000047
The image feature vector formed by the pixel values of all the pixel points in the k-th sub-block in all the local amplitude images in sequence is recorded as
Figure FDA0001515476940000048
Then will be
Figure FDA0001515476940000049
The set of image feature vectors of the sub-blocks in all the local phase images in (1) is denoted as
Figure FDA00015154769400000410
And will be
Figure FDA00015154769400000411
The set of image feature vectors of the sub-blocks in all the local amplitude images in (1) is expressed as
Figure FDA00015154769400000412
Wherein, the symbol
Figure FDA00015154769400000413
Is the round-down (floor) operation symbol, 1 ≤ k ≤ M,
Figure FDA00015154769400000414
Figure FDA00015154769400000415
and
Figure FDA00015154769400000416
the dimensions of (A) are all 64 x 1;
also, will
Figure FDA00015154769400000417
Each local phase image of (1) and
Figure FDA00015154769400000418
each local amplitude image in (1) is divided into
Figure FDA00015154769400000419
Sub-blocks with size of 8 × 8 and not overlapped with each other; then will be
Figure FDA00015154769400000420
The pixel values of all pixel points in each sub-block in each local phase image form the image characteristic vector of the sub-block in sequence, and the image characteristic vector is obtained
Figure FDA00015154769400000421
The image feature vector formed by the pixel values of all the pixel points in the kth sub-block in all the local phase images in sequence is recorded as
Figure FDA00015154769400000422
And will be
Figure FDA00015154769400000423
The pixel values of all pixel points in each sub-block in each local amplitude image form the image characteristic vector of the sub-block in sequence, and the image characteristic vector is obtained
Figure FDA00015154769400000424
The image feature vector formed by the pixel values of all the pixel points in the kth sub-block in all the local amplitude images in sequence is recorded as
Figure FDA00015154769400000425
Then will be
Figure FDA00015154769400000426
The set of image feature vectors of the sub-blocks in all the local phase images in (1) is denoted as
Figure FDA0001515476940000051
And will be
Figure FDA0001515476940000052
The set of image feature vectors of the sub-blocks in all the local amplitude images in (1) is expressed as
Figure FDA0001515476940000053
wherein,
Figure FDA0001515476940000054
and
Figure FDA0001515476940000055
the dimensions of (A) are all 64 x 1;
also, will
Figure FDA0001515476940000056
Each local phase image of (1) and
Figure FDA0001515476940000057
each local amplitude image in (1) is divided into
Figure FDA0001515476940000058
Sub-blocks with size of 8 × 8 and not overlapped with each other; then will be
Figure FDA0001515476940000059
The pixel values of all pixel points in each sub-block in each local phase image form the image characteristic vector of the sub-block in sequence, and the image characteristic vector is obtained
Figure FDA00015154769400000510
The image feature vector formed by the pixel values of all the pixel points in the kth sub-block in all the local phase images in sequence is recorded as
Figure FDA00015154769400000511
And will be
Figure FDA00015154769400000512
The pixel values of all pixel points in each sub-block in each local amplitude image form the image characteristic vector of the sub-block in sequence, and the image characteristic vector is obtained
Figure FDA00015154769400000513
The image feature vector formed by the pixel values of all the pixel points in the k-th sub-block in all the local amplitude images in sequence is recorded as
Figure FDA00015154769400000514
Then will be
Figure FDA00015154769400000515
The set of image feature vectors of the sub-blocks in all the local phase images in (1) is denoted as
Figure FDA00015154769400000516
And will be
Figure FDA00015154769400000517
The set of image feature vectors of the sub-blocks in all the local amplitude images in (1) is expressed as
Figure FDA00015154769400000518
wherein,
Figure FDA00015154769400000519
and
Figure FDA00015154769400000520
the dimensions of (A) are all 64 x 1;
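Step ①_5's partition of each W × H image into M = ⌊W/8⌋ × ⌊H/8⌋ non-overlapping 8 × 8 sub-blocks, each flattened into a 64 × 1 image feature vector, can be sketched as follows (edge remainders are discarded, matching the floor operation; the function name is ours):

```python
import numpy as np

def block_feature_vectors(img, bs=8):
    """Divide an image into non-overlapping bs x bs sub-blocks and
    flatten each block's pixel values, row by row, into one feature
    vector; the vectors are returned as the columns of a
    (bs*bs) x M matrix (64 x M for bs = 8)."""
    img = np.asarray(img)
    H, W = img.shape
    nh, nw = H // bs, W // bs                 # floor: edge remainder dropped
    blocks = (img[:nh * bs, :nw * bs]
              .reshape(nh, bs, nw, bs)
              .swapaxes(1, 2)                 # -> (nh, nw, bs, bs)
              .reshape(nh * nw, bs * bs))
    return blocks.T
```

The same routine serves the local phase images, the local amplitude images, and (in step ①_6) the distorted stereo images themselves.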
① _6, will
Figure FDA00015154769400000521
Each distorted stereo image division in (1)
Figure FDA00015154769400000522
Sub-blocks with size of 8 × 8 and not overlapped with each other; then respectively obtaining the images by adopting 6 different full reference image quality evaluation methods
Figure FDA00015154769400000523
The objective evaluation prediction value of each subblock in each distorted stereo image is obtained; then will be
Figure FDA00015154769400000524
The 6 objective evaluation predicted values of each sub-block in each distorted stereo image form the image quality vector of the sub-block in sequence, and the image quality vector of the sub-block is obtained
Figure FDA00015154769400000525
The image quality vector formed by the 6 objective evaluation predicted values of the kth sub-block in all the distorted stereo images in sequence is recorded as y1,k; then will be
Figure FDA0001515476940000061
The set of image quality vectors of all the sub-blocks in all the distorted stereo images in (1) is denoted as {y1,k | 1 ≤ k ≤ M}; wherein the dimension of y1,k is 6 × 1;
also, will
Figure FDA0001515476940000062
Each distorted stereo image division in (1)
Figure FDA0001515476940000063
Sub-blocks with size of 8 × 8 and not overlapped with each other; then respectively obtaining the images by adopting 6 different full reference image quality evaluation methods
Figure FDA0001515476940000064
The objective evaluation prediction value of each subblock in each distorted stereo image is obtained; then will be
Figure FDA0001515476940000065
The 6 objective evaluation predicted values of each sub-block in each distorted stereo image form the image quality vector of the sub-block in sequence, and the image quality vector of the sub-block is obtained
Figure FDA0001515476940000066
The image quality vector formed by the 6 objective evaluation predicted values of the kth sub-block in all the distorted stereo images in sequence is recorded as y2,k; then will be
Figure FDA0001515476940000067
The set of image quality vectors of all the sub-blocks in all the distorted stereo images in (1) is denoted as {y2,k | 1 ≤ k ≤ M}; wherein the dimension of y2,k is 6 × 1;
also, will
Figure FDA0001515476940000068
Each distorted stereo image division in (1)
Figure FDA0001515476940000069
Sub-blocks with size of 8 × 8 and not overlapped with each other; then respectively obtaining the images by adopting 6 different full reference image quality evaluation methods
Figure FDA00015154769400000610
The objective evaluation prediction value of each subblock in each distorted stereo image is obtained; then will be
Figure FDA00015154769400000611
The 6 objective evaluation predicted values of each sub-block in each distorted stereo image form the image quality vector of the sub-block in sequence, and the image quality vector of the sub-block is obtained
Figure FDA00015154769400000612
The image quality vector formed by the 6 objective evaluation predicted values of the kth sub-block in all the distorted stereo images in sequence is recorded as y3,k; then will be
Figure FDA00015154769400000613
The set of image quality vectors of all the sub-blocks in all the distorted stereo images in (1) is denoted as {y3,k | 1 ≤ k ≤ M}; wherein the dimension of y3,k is 6 × 1;
①_7. Adopting the K-SVD method, on
Figure FDA00015154769400000614
{y1,k | 1 ≤ k ≤ M}, {y2,k | 1 ≤ k ≤ M} and {y3,k | 1 ≤ k ≤ M} together, a joint dictionary training operation is performed, constructing
Figure FDA00015154769400000615
Respective image feature dictionary table and image quality dictionary table, corresponding
Figure FDA0001515476940000071
And
Figure FDA0001515476940000072
wherein,
Figure FDA0001515476940000073
and
Figure FDA0001515476940000074
the dimensions of (a) are all 64 x K,
Figure FDA0001515476940000075
and
Figure FDA0001515476940000076
the dimensions are all 6 × K, K denotes the set number of dictionary atoms, K ≥ 1;
also, the K-SVD method is adopted to pair
Figure FDA0001515476940000077
{y1,k | 1 ≤ k ≤ M}, {y2,k | 1 ≤ k ≤ M} and {y3,k | 1 ≤ k ≤ M} together, a joint dictionary training operation is performed, constructing
Figure FDA0001515476940000078
And
Figure FDA0001515476940000079
respective image feature dictionary table and image quality dictionary table, corresponding
Figure FDA00015154769400000710
Figure FDA00015154769400000711
And
Figure FDA00015154769400000712
wherein,
Figure FDA00015154769400000713
and
Figure FDA00015154769400000714
the dimensions of (a) are all 64 x K,
Figure FDA00015154769400000715
and
Figure FDA00015154769400000716
the dimensions are all 6 × K;
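Step ①_7's joint dictionary training couples a 64 × K image feature dictionary with a 6 × K image quality dictionary through a shared sparse code: stacking each feature vector with its quality vector before learning forces one code to explain both. The sketch below shows that stacking idea with a simplified alternating update in place of full K-SVD; every name in it is illustrative:

```python
import numpy as np

def joint_dictionary(X, Y, K, n_iter=10, s=3, seed=0):
    """Sketch of joint dictionary learning.  X: feature vectors (e.g.
    64 x M), Y: quality vectors (e.g. 6 x M).  Samples are stacked so
    one shared sparse code explains both; the trained dictionary splits
    back into a feature dictionary and a quality dictionary.  A simple
    alternation (top-s thresholded coding + least-squares update)
    stands in for the full K-SVD algorithm."""
    Z = np.vstack([X, Y])                        # stacked joint samples
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Z.shape[0], K))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    for _ in range(n_iter):
        # sparse coding: keep the s largest-correlation atoms per sample
        C = D.T @ Z
        A = np.where(np.abs(C) >= np.sort(np.abs(C), axis=0)[-s], C, 0.0)
        # dictionary update: least squares, then renormalize atoms
        D = Z @ np.linalg.pinv(A)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D[:X.shape[0]], D[X.shape[0]:]        # (feature, quality) parts
```

At test time only the feature dictionary is used to sparse-code a sub-block; multiplying the quality dictionary by the resulting code then yields the sub-block's estimated image quality vector, which is the point of training the two dictionaries jointly.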
the specific steps of the test phase process are as follows:
②_1. For any test stereo image Stest with width W' and height H', record the left viewpoint image of Stest as Ltest and the right viewpoint image of Stest as Rtest; wherein W' may be the same as or different from W, and H' may be the same as or different from H;
②_2. By the same operation as step ①_4, obtain the respective local phase images and local amplitude images of Stest, Ltest and Rtest, and record the local phase image and the local amplitude image of Ltest correspondingly as
Figure FDA00015154769400000717
And
Figure FDA00015154769400000718
and record the local phase image and the local amplitude image of Rtest correspondingly as
Figure FDA00015154769400000719
And
Figure FDA00015154769400000720
② _3, will
Figure FDA00015154769400000721
And
Figure FDA00015154769400000722
are respectively divided into
Figure FDA00015154769400000723
non-overlapping sub-blocks of size 8 × 8; then will be
Figure FDA00015154769400000724
The pixel values of all pixel points in each sub-block in the image data are sequentially combined into an image characteristic vector of the sub-block, and the image characteristic vector is to be obtained
Figure FDA00015154769400000725
The image feature vector formed by the pixel values of all the pixel points in the t-th sub-block in sequence is recorded as
Figure FDA00015154769400000726
And will be
Figure FDA00015154769400000727
The pixel values of all pixel points in each sub-block in the image data are sequentially combined into an image characteristic vector of the sub-block, and the image characteristic vector is to be obtained
Figure FDA00015154769400000728
The image feature vector formed by the pixel values of all the pixel points in the t-th sub-block in sequence is recorded as
Figure FDA00015154769400000729
Will be provided with
Figure FDA00015154769400000730
The pixel values of all pixel points in each sub-block in the image data are sequentially combined into an image characteristic vector of the sub-block, and the image characteristic vector is to be obtained
Figure FDA0001515476940000081
The image feature vector formed by the pixel values of all the pixel points in the t-th sub-block in sequence is recorded as
Figure FDA0001515476940000082
Will be provided with
Figure FDA0001515476940000083
The pixel values of all pixel points in each sub-block in the image data are sequentially combined into an image characteristic vector of the sub-block, and the image characteristic vector is to be obtained
Figure FDA0001515476940000084
The image feature vector formed by the pixel values of all the pixel points in the t-th sub-block in sequence is recorded as
Figure FDA0001515476940000085
Then, for
Figure FDA0001515476940000086
the set of the image feature vectors of all its sub-blocks is denoted as
Figure FDA0001515476940000087
and for
Figure FDA0001515476940000088
the set of the image feature vectors of all its sub-blocks is denoted as
Figure FDA0001515476940000089
For
Figure FDA00015154769400000810
the set of the image feature vectors of all its sub-blocks is denoted as
Figure FDA00015154769400000811
For
Figure FDA00015154769400000812
the set of the image feature vectors of all its sub-blocks is denoted as
Figure FDA00015154769400000813
wherein,
Figure FDA00015154769400000814
Figure FDA00015154769400000815
and
Figure FDA00015154769400000816
the dimensions thereof are all 64 × 1;
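The partitioning of step ②_3 can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; the claim does not specify the scan order of pixels within a sub-block, so raster (row-major) order is assumed here.

```python
import numpy as np

def image_to_block_vectors(img, block=8):
    """Split an image into non-overlapping block x block sub-blocks and
    flatten each sub-block, in assumed raster order, into a
    (block*block, 1) image feature vector (64 x 1 for 8 x 8 blocks).
    Border rows/columns that do not fill a whole block are discarded."""
    H, W = img.shape
    vecs = []
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            sub = img[r:r + block, c:c + block]
            vecs.append(sub.reshape(-1, 1))  # column vector of pixel values
    return vecs

# A 16 x 24 image yields (16 // 8) * (24 // 8) = 6 sub-blocks.
img = np.arange(16 * 24, dtype=float).reshape(16, 24)
vecs = image_to_block_vectors(img)
```

Each of the four images (two local phase, two local amplitude) is processed this way, producing the 64 × 1 feature vectors the claim stacks into its sets.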
② _4 constructed according to the procedure in the training phase
Figure FDA00015154769400000817
Separately optimized reconstruction
Figure FDA00015154769400000818
And
Figure FDA00015154769400000819
a first sparse coefficient matrix for each respective image feature vector
Figure FDA00015154769400000820
Is recorded as a first sparse coefficient matrix
Figure FDA00015154769400000821
Is solved by adopting a K-SVD method
Figure FDA00015154769400000822
Obtained by
Figure FDA00015154769400000823
Is recorded as a first sparse coefficient matrix
Figure FDA00015154769400000824
Is solved by adopting a K-SVD method
Figure FDA00015154769400000825
Obtaining;
also constructed from the process during the training phase
Figure FDA00015154769400000826
Separately optimized reconstruction
Figure FDA00015154769400000827
And
Figure FDA00015154769400000828
a second sparse coefficient matrix for each respective image feature vector
Figure FDA00015154769400000829
Is recorded as a second sparse coefficient matrix
Figure FDA00015154769400000830
Is solved by adopting a K-SVD method
Figure FDA00015154769400000831
Obtained by
Figure FDA00015154769400000832
Is recorded as a second sparse coefficient matrix
Figure FDA00015154769400000833
Is solved by adopting a K-SVD method
Figure FDA0001515476940000091
Obtaining;
also constructed from the process during the training phase
Figure FDA0001515476940000092
Separately optimized reconstruction
Figure FDA0001515476940000093
And
Figure FDA0001515476940000094
a third sparse coefficient matrix for each respective image feature vector
Figure FDA0001515476940000095
Is recorded as a third sparse coefficient matrix
Figure FDA0001515476940000096
Is solved by adopting a K-SVD method
Figure FDA0001515476940000097
Obtained by
Figure FDA0001515476940000098
Is recorded as a third sparse coefficient matrix
Figure FDA0001515476940000099
Is solved by adopting a K-SVD method
Figure FDA00015154769400000910
Obtaining;
also constructed from the process during the training phase
Figure FDA00015154769400000911
Separately optimized reconstruction
Figure FDA00015154769400000912
And
Figure FDA00015154769400000913
a first sparse coefficient matrix for each respective image feature vector
Figure FDA00015154769400000914
Is recorded as a first sparse coefficient matrix
Figure FDA00015154769400000915
Is solved by adopting a K-SVD method
Figure FDA00015154769400000916
Obtained by
Figure FDA00015154769400000917
Is recorded as a first sparse coefficient matrix
Figure FDA00015154769400000918
Is solved by adopting a K-SVD method
Figure FDA00015154769400000919
Obtaining;
also constructed from the process during the training phase
Figure FDA00015154769400000920
Separately optimized reconstruction
Figure FDA00015154769400000921
And
Figure FDA00015154769400000922
a second sparse coefficient matrix for each respective image feature vector
Figure FDA00015154769400000923
Is recorded as a second sparse coefficient matrix
Figure FDA00015154769400000924
Is solved by adopting a K-SVD method
Figure FDA00015154769400000925
Obtained by
Figure FDA00015154769400000926
Is recorded as a second sparse coefficient matrix
Figure FDA00015154769400000927
Is solved by adopting a K-SVD method
Figure FDA00015154769400000928
Obtaining;
also constructed from the process during the training phase
Figure FDA00015154769400000929
Separately optimized reconstruction
Figure FDA00015154769400000930
And
Figure FDA0001515476940000101
a third sparse coefficient matrix for each respective image feature vector
Figure FDA0001515476940000102
Is recorded as a third sparse coefficient matrix
Figure FDA0001515476940000103
Is solved by adopting a K-SVD method
Figure FDA0001515476940000104
Obtained by
Figure FDA0001515476940000105
Is recorded as a third sparse coefficient matrix
Figure FDA0001515476940000106
Is solved by adopting a K-SVD method
Figure FDA0001515476940000107
Obtaining;
wherein,
Figure FDA0001515476940000108
and
Figure FDA0001515476940000109
the dimensions thereof are all K × 1, min() is the minimum function, the symbol "‖·‖F" denotes the Frobenius norm of a matrix, the symbol "‖·‖1" denotes the 1-norm of a matrix, and λ is the Lagrange parameter;
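Step ②_4 solves, for each 64 × 1 feature vector x with the trained dictionary D fixed, a problem of the form min_a ‖x − D a‖² + λ‖a‖₁. The patent obtains these sparse coefficient matrices through its K-SVD toolchain; as an assumed stand-in, the sketch below solves the same lasso-type problem with generic iterative soft-thresholding (ISTA).

```python
import numpy as np

def sparse_code_ista(D, x, lam=0.1, n_iter=200):
    """Solve min_a ||x - D a||_2^2 + lam * ||a||_1 for a fixed dictionary D
    by iterative soft-thresholding; a generic proximal solver standing in
    for the K-SVD pipeline named in the claim (an assumption)."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2          # squared spectral norm of D
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # half-gradient of the data term
        z = a - grad / L
        # soft-threshold: proximal step for the l1 penalty
        a = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms, as in K-SVD
x = 2.0 * D[:, 3]                          # signal built from one atom
a = sparse_code_ista(D, x, lam=0.05)
```

The recovered coefficient vector concentrates on atom 3, illustrating the per-feature-vector sparse reconstruction the claim performs for each sub-block.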
② _5 constructed according to the procedure in the training phase
Figure FDA00015154769400001010
Estimate separately
Figure FDA00015154769400001011
And
Figure FDA00015154769400001012
the first image quality vector of each sub-block in each will
Figure FDA00015154769400001013
the first image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001014
Figure FDA00015154769400001015
Will be provided with
Figure FDA00015154769400001016
the first image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001017
Also constructed from the process during the training phase
Figure FDA00015154769400001018
Estimate separately
Figure FDA00015154769400001019
And
Figure FDA00015154769400001020
the second image quality vector of each sub-block in each will
Figure FDA00015154769400001021
the second image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001022
Figure FDA00015154769400001023
Will be provided with
Figure FDA00015154769400001024
the second image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001025
Also constructed from the process during the training phase
Figure FDA00015154769400001026
Estimate separately
Figure FDA00015154769400001027
And
Figure FDA00015154769400001028
the third image quality vector of each sub-block in each
Figure FDA00015154769400001029
the third image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001030
Figure FDA00015154769400001031
Will be provided with
Figure FDA00015154769400001032
the third image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001033
Figure FDA00015154769400001034
Also constructed from the process during the training phase
Figure FDA00015154769400001035
Estimate separately
Figure FDA00015154769400001036
And
Figure FDA00015154769400001037
the first image quality vector of each sub-block in each will
Figure FDA00015154769400001038
the first image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001039
Figure FDA00015154769400001040
Will be provided with
Figure FDA00015154769400001041
the first image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001042
Figure FDA00015154769400001043
Also constructed from the process during the training phase
Figure FDA00015154769400001044
Estimate separately
Figure FDA00015154769400001045
And
Figure FDA00015154769400001046
the second image quality vector of each sub-block in each will
Figure FDA00015154769400001047
the second image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001048
Figure FDA0001515476940000111
Will be provided with
Figure FDA0001515476940000112
the second image quality vector of the t-th sub-block thereof is noted as
Figure FDA0001515476940000113
Figure FDA0001515476940000114
Also constructed from the process during the training phase
Figure FDA0001515476940000115
Estimate separately
Figure FDA0001515476940000116
And
Figure FDA0001515476940000117
the third image quality vector of each sub-block in each
Figure FDA0001515476940000118
the third image quality vector of the t-th sub-block thereof is noted as
Figure FDA0001515476940000119
Figure FDA00015154769400001110
Will be provided with
Figure FDA00015154769400001111
the third image quality vector of the t-th sub-block thereof is noted as
Figure FDA00015154769400001112
Figure FDA00015154769400001113
wherein,
Figure FDA00015154769400001114
and
Figure FDA00015154769400001115
the dimensions thereof are all 6 × 1;
② _6, calculation
Figure FDA00015154769400001116
and the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block therein; for
Figure FDA00015154769400001117
the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of the t-th sub-block therein are correspondingly recorded as
Figure FDA00015154769400001118
And
Figure FDA00015154769400001119
wherein exp() denotes the exponential function with base e, the symbol "‖·‖2" denotes the 2-norm of a matrix, and η is a control parameter,
Figure FDA00015154769400001120
is composed of
Figure FDA00015154769400001121
the input vector thereof;
also, calculate
Figure FDA00015154769400001122
and the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block therein; for
Figure FDA00015154769400001123
the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of the t-th sub-block therein are correspondingly recorded as
Figure FDA00015154769400001124
And
Figure FDA00015154769400001125
Figure FDA00015154769400001126
Figure FDA0001515476940000121
wherein,
Figure FDA0001515476940000122
is composed of
Figure FDA0001515476940000123
the input vector thereof;
also, calculate
Figure FDA0001515476940000124
and the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block therein; for
Figure FDA0001515476940000125
the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of the t-th sub-block therein are correspondingly recorded as
Figure FDA0001515476940000126
And
Figure FDA0001515476940000127
Figure FDA0001515476940000128
wherein,
Figure FDA0001515476940000129
is composed of
Figure FDA00015154769400001210
the input vector thereof;
also, calculate
Figure FDA00015154769400001211
and the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of each sub-block therein; for
Figure FDA00015154769400001212
the multi-distortion fusion sparse coefficient matrix and the multi-distortion fusion image quality of the t-th sub-block therein are correspondingly recorded as
Figure FDA00015154769400001213
And
Figure FDA00015154769400001214
Figure FDA00015154769400001215
Figure FDA00015154769400001216
wherein,
Figure FDA00015154769400001217
is composed of
Figure FDA00015154769400001218
the input vector thereof;
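Step ②_6 fuses, per sub-block, the first/second/third sparse coefficient matrices and image quality vectors into a single multi-distortion pair using exponential weights driven by η. The exact fusion formula is an image in the claim, so the sketch below is an assumed reading in which each candidate representation is weighted by exp(−reconstruction residual / η):

```python
import numpy as np

def fuse_multi_distortion(x, D, coeffs, quals, eta=1.0):
    """Fuse per-distortion sparse coefficient vectors and quality vectors
    of one sub-block. Assumption: weight w_i = exp(-||x - D a_i||_2 / eta),
    normalized to sum to 1 (the claim's exact formula is a figure image)."""
    ws = np.array([np.exp(-np.linalg.norm(x - D @ a) / eta) for a in coeffs])
    ws /= ws.sum()                               # normalize the weights
    a_fused = sum(w * a for w, a in zip(ws, coeffs))
    q_fused = sum(w * q for w, q in zip(ws, quals))
    return a_fused, q_fused

D = np.eye(4)                                    # toy 4-atom dictionary
x = np.array([1.0, 0.0, 0.0, 0.0])               # input vector of the sub-block
coeffs = [x.copy(), np.zeros(4)]                 # perfect vs. empty candidate
quals = [np.ones(6), np.zeros(6)]                # their 6 x 1 quality vectors
a_f, q_f = fuse_multi_distortion(x, D, coeffs, quals)
```

The better-fitting candidate dominates the fused quality, which matches the intent of residual-driven exponential weighting.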
② _7, calculation
Figure FDA00015154769400001219
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure FDA00015154769400001220
And QL,P
Figure FDA0001515476940000131
Also, calculate
Figure FDA0001515476940000132
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure FDA0001515476940000133
And QR,P
Figure FDA0001515476940000134
Also, calculate
Figure FDA0001515476940000135
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure FDA0001515476940000136
And QL,A
Figure FDA0001515476940000137
Also, calculate
Figure FDA0001515476940000138
And the global sparse coefficient matrix and the global image quality are correspondingly recorded as
Figure FDA0001515476940000139
And QR,A
Figure FDA00015154769400001310
② _8, according to
Figure FDA00015154769400001311
And
Figure FDA00015154769400001312
and QL,P and QR,P, calculating the objective quality evaluation prediction value of the local phase image of Stest, recorded as QP: QP = ωL,P × QL,P + ωR,P × QR,P; wherein ωL,P is the weight of QL,P,
Figure FDA00015154769400001313
ωR,P is the weight of QR,P,
Figure FDA00015154769400001314
the symbol "< >" denotes the inner product, and C is a control parameter;
also according to
Figure FDA0001515476940000141
And
Figure FDA0001515476940000142
and QL,AAnd QR,ACalculating StestThe predicted value of the objective evaluation of the quality of the local amplitude image is marked as QA,QA=ωL,A×QL,AR,A×QR,A(ii) a Wherein, ω isL,AIs QL,AThe weight of (a) is calculated,
Figure FDA0001515476940000143
ωR,A is the weight of QR,A,
Figure FDA0001515476940000144
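Step ②_8 combines the left and right global qualities with weights built from inner products of the global sparse coefficient vectors. The weight expressions themselves are figure images in the claim, so the following energy-based form, with C stabilizing the denominator, is an assumed stand-in rather than the patent's exact formula:

```python
import numpy as np

def binocular_combine(aL, aR, qL, qR, C=1e-4):
    """Assumed binocular weighting: each view's weight grows with the
    inner product <a, a> (energy) of its global sparse coefficient
    vector; C regularizes the denominator as the claim's control
    parameter does."""
    eL = float(aL @ aL) + C
    eR = float(aR @ aR) + C
    wL = eL / (eL + eR)
    return wL * qL + (1.0 - wL) * qR

aL = np.array([2.0, 0.0])    # left view carries more coefficient energy
aR = np.array([1.0, 0.0])
q = binocular_combine(aL, aR, qL=1.0, qR=0.0)
```

With four times the coefficient energy, the left view receives roughly 80% of the weight, illustrating why asymmetric distortion shifts the prediction toward the less degraded view.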
②_9, according to QP and QA, calculating the objective image quality evaluation prediction value of Stest, denoted as Q: Q = (ωP × (QP)^n + (1 − ωP) × (QA)^n)^(1/n); wherein ωP and n are weighting parameters.
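The final pooling of step ②_9 is a weighted power mean of the phase-image and amplitude-image predictions, and can be written directly:

```python
def pool_quality(QP, QA, wP=0.5, n=2.0):
    """Step 2_9: Q = (wP * QP**n + (1 - wP) * QA**n) ** (1 / n), a weighted
    power mean of the two predictions. The trained values of wP and n are
    not stated in the claim; 0.5 and 2 are placeholders."""
    return (wP * QP ** n + (1.0 - wP) * QA ** n) ** (1.0 / n)
```

When QP = QA the pooled score equals them both, and for any wP in (0, 1) the result lies between the two inputs, which is the behavior expected of this kind of generalized mean.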
2. The objective evaluation method for quality of asymmetric multi-distortion stereo images according to claim 1, wherein in the step ① _4,
Figure FDA0001515476940000145
and
Figure FDA0001515476940000146
the acquisition process comprises the following steps:
① _4a, using Log-Gabor filter pairs
Figure FDA0001515476940000147
Each pixel point in the image is filtered to obtain
Figure FDA0001515476940000148
The even symmetric frequency response and the odd symmetric frequency response of each pixel point in different scales and directions will be
Figure FDA0001515476940000149
the even symmetric frequency response, in different scales and directions, of the pixel point whose coordinate position is (x, y) is recorded as eα,θ(x, y), and
Figure FDA00015154769400001410
the odd symmetric frequency response, in different scales and directions, of the pixel point whose coordinate position is (x, y) is recorded as oα,θ(x, y), wherein 1 ≤ x ≤ W, 1 ≤ y ≤ H, and α represents the scale factor of the Log-Gabor filter,
Figure FDA00015154769400001411
Figure FDA00015154769400001412
theta denotes a direction factor of the Log-Gabor filter,
Figure FDA00015154769400001413
Figure FDA00015154769400001414
① _4b, calculation
Figure FDA00015154769400001415
The phase consistency characteristics of each pixel point in different directions are
Figure FDA00015154769400001416
the phase consistency characteristics, in different directions, of the pixel point whose coordinate position is (x, y) are recorded as PCθ(x, y),
Figure FDA00015154769400001417
wherein,
Figure FDA00015154769400001418
Figure FDA00015154769400001419
① _4c, according to
Figure FDA0001515476940000151
The direction corresponding to the maximum phase consistency characteristic of each pixel point in the image is calculated
Figure FDA0001515476940000152
The local phase characteristic and the local amplitude characteristic of each pixel point in the image; for the
Figure FDA0001515476940000153
find, for the pixel point whose coordinate position is (x, y), the maximum among its phase consistency characteristics in different directions and the direction corresponding to this maximum, denoted as θm; then, according to θm, calculate the local phase characteristic and the local amplitude characteristic of the pixel point, correspondingly recorded as
Figure FDA0001515476940000154
And
Figure FDA0001515476940000155
Figure FDA0001515476940000156
Figure FDA0001515476940000157
wherein,
Figure FDA0001515476940000158
arctan() is the arctangent function,
Figure FDA0001515476940000159
Figure FDA00015154769400001510
to represent
Figure FDA00015154769400001511
the odd symmetric frequency response, at different scales, of the pixel point whose coordinate position is (x, y) in the direction θm corresponding to its maximum phase consistency characteristic,
Figure FDA00015154769400001512
Figure FDA00015154769400001513
to represent
Figure FDA00015154769400001514
the even symmetric frequency response, at different scales, of the pixel point whose coordinate position is (x, y) in the direction θm corresponding to its maximum phase consistency characteristic,
Figure FDA00015154769400001515
① _4d, according to
Figure FDA00015154769400001516
The local phase characteristics of all the pixel points in the image are obtained
Figure FDA00015154769400001517
Local phase image of
Figure FDA00015154769400001518
Also according to
Figure FDA00015154769400001519
Obtaining the local amplitude characteristics of all the pixel points in the image
Figure FDA00015154769400001520
Local amplitude image of
Figure FDA00015154769400001521
Obtaining according to steps ① _4a through ① _4d
Figure FDA00015154769400001522
And
Figure FDA00015154769400001523
in the same manner as the procedure of (1)
Figure FDA00015154769400001524
And
Figure FDA00015154769400001525
and
Figure FDA00015154769400001526
3. The objective evaluation method for quality of asymmetric multi-distortion stereo images according to claim 1 or 2, wherein in the step ①_7,
Figure FDA00015154769400001527
and
Figure FDA00015154769400001528
is solved by adopting a K-SVD method
Figure FDA00015154769400001529
obtained, wherein min() is the minimum function, the symbol "‖·‖F" denotes the Frobenius norm of a matrix, the symbol "‖·‖1" denotes the 1-norm of a matrix, and 1 ≤ s ≤ 3,
Figure FDA00015154769400001530
Figure FDA00015154769400001531
Figure FDA00015154769400001532
and
Figure FDA00015154769400001533
the dimensions thereof are all 64 × M,
Figure FDA00015154769400001534
is composed of
Figure FDA00015154769400001535
The 1 st first image feature vector in (a),
Figure FDA00015154769400001536
is composed of
Figure FDA00015154769400001537
The kth first image feature vector in (1),
Figure FDA00015154769400001538
is composed of
Figure FDA0001515476940000161
The mth first image feature vector in (1),
Figure FDA0001515476940000162
is composed of
Figure FDA0001515476940000163
The 1 st first image feature vector in (a),
Figure FDA0001515476940000164
is composed of
Figure FDA0001515476940000165
The kth first image feature vector in (1),
Figure FDA0001515476940000166
is composed of
Figure FDA0001515476940000167
The mth first image feature vector in (1),
Figure FDA0001515476940000168
is composed of
Figure FDA0001515476940000169
The 1 st first image feature vector in (a),
Figure FDA00015154769400001610
is composed of
Figure FDA00015154769400001611
The kth first image feature vector in (1),
Figure FDA00015154769400001612
is composed of
Figure FDA00015154769400001613
M-th first image feature vector, Y1 = [y1,1 … y1,k … y1,M], Y2 = [y2,1 … y2,k … y2,M], Y3 = [y3,1 … y3,k … y3,M], the dimensions of Y1, Y2 and Y3 are all 6 × M, y1,1 is the 1st image quality vector in {y1,k | 1 ≤ k ≤ M}, y1,k is the k-th image quality vector in {y1,k | 1 ≤ k ≤ M}, y1,M is the M-th image quality vector in {y1,k | 1 ≤ k ≤ M}, y2,1 is the 1st image quality vector in {y2,k | 1 ≤ k ≤ M}, y2,k is the k-th image quality vector in {y2,k | 1 ≤ k ≤ M}, y2,M is the M-th image quality vector in {y2,k | 1 ≤ k ≤ M}, y3,1 is the 1st image quality vector in {y3,k | 1 ≤ k ≤ M}, y3,k is the k-th image quality vector in {y3,k | 1 ≤ k ≤ M}, y3,M is the M-th image quality vector in {y3,k | 1 ≤ k ≤ M},
Figure FDA00015154769400001614
and
Figure FDA00015154769400001615
each represent a sparse matrix,
Figure FDA00015154769400001616
Figure FDA00015154769400001617
Figure FDA00015154769400001618
and
Figure FDA00015154769400001619
the dimensions thereof are all K × M,
Figure FDA00015154769400001620
is composed of
Figure FDA00015154769400001621
The 1 st column vector of (1),
Figure FDA00015154769400001622
is composed of
Figure FDA00015154769400001623
The k-th column vector of (a),
Figure FDA00015154769400001624
is composed of
Figure FDA00015154769400001625
The M-th column vector of (1),
Figure FDA00015154769400001626
is composed of
Figure FDA00015154769400001627
The 1 st column vector of (1),
Figure FDA00015154769400001628
is composed of
Figure FDA00015154769400001629
The k-th column vector of (a),
Figure FDA00015154769400001630
is composed of
Figure FDA00015154769400001631
The M-th column vector of (1),
Figure FDA00015154769400001632
is composed of
Figure FDA00015154769400001633
The 1 st column vector of (1),
Figure FDA00015154769400001634
is composed of
Figure FDA00015154769400001635
The k-th column vector of (a),
Figure FDA00015154769400001636
is composed of
Figure FDA00015154769400001637
The M-th column vector of (1),
Figure FDA00015154769400001638
the dimensions thereof are all K × 1; the symbol "[ ]" is a vector representation symbol, γ is a weighting parameter, and λ is the Lagrange parameter;
in the step ① _7, the user can,
Figure FDA00015154769400001639
and
Figure FDA00015154769400001640
is solved by adopting a K-SVD method
Figure FDA00015154769400001641
obtained, wherein
Figure FDA00015154769400001642
Figure FDA00015154769400001643
and
Figure FDA0001515476940000171
the dimensions thereof are all 64 × M,
Figure FDA0001515476940000172
is composed of
Figure FDA0001515476940000173
The 1 st second image feature vector in (b),
Figure FDA0001515476940000174
is composed of
Figure FDA0001515476940000175
The kth second image feature vector in (b),
Figure FDA0001515476940000176
is composed of
Figure FDA0001515476940000177
The mth second image feature vector in (1),
Figure FDA0001515476940000178
is composed of
Figure FDA0001515476940000179
The 1 st second image feature vector in (b),
Figure FDA00015154769400001710
is composed of
Figure FDA00015154769400001711
The kth second image feature vector in (b),
Figure FDA00015154769400001712
is composed of
Figure FDA00015154769400001713
The mth second image feature vector in (1),
Figure FDA00015154769400001714
is composed of
Figure FDA00015154769400001715
The 1 st second image feature vector in (b),
Figure FDA00015154769400001716
is composed of
Figure FDA00015154769400001717
The kth second image feature vector in (b),
Figure FDA00015154769400001718
is composed of
Figure FDA00015154769400001719
The mth second image feature vector in (1),
Figure FDA00015154769400001720
and
Figure FDA00015154769400001721
each represent a sparse matrix,
Figure FDA00015154769400001722
Figure FDA00015154769400001723
Figure FDA00015154769400001724
and
Figure FDA00015154769400001725
the dimensions thereof are all K × M,
Figure FDA00015154769400001726
is composed of
Figure FDA00015154769400001727
The 1 st column vector of (1),
Figure FDA00015154769400001728
is composed of
Figure FDA00015154769400001729
The k-th column vector of (a),
Figure FDA00015154769400001730
is composed of
Figure FDA00015154769400001731
The M-th column vector of (1),
Figure FDA00015154769400001732
is composed of
Figure FDA00015154769400001733
The 1 st column vector of (1),
Figure FDA00015154769400001734
is composed of
Figure FDA00015154769400001735
The k-th column vector of (a),
Figure FDA00015154769400001736
is composed of
Figure FDA00015154769400001737
The M-th column vector of (1),
Figure FDA00015154769400001738
is composed of
Figure FDA00015154769400001739
The 1 st column vector of (1),
Figure FDA00015154769400001740
is composed of
Figure FDA00015154769400001741
The k-th column vector of (a),
Figure FDA00015154769400001742
is composed of
Figure FDA00015154769400001743
The M-th column vector of (1),
Figure FDA00015154769400001744
the dimensions of (A) are each K × 1.
CN201711380389.7A 2017-12-20 2017-12-20 Objective evaluation method for quality of asymmetric multi-distortion stereo image Active CN108460752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711380389.7A CN108460752B (en) 2017-12-20 2017-12-20 Objective evaluation method for quality of asymmetric multi-distortion stereo image


Publications (2)

Publication Number Publication Date
CN108460752A CN108460752A (en) 2018-08-28
CN108460752B true CN108460752B (en) 2020-04-10

Family

ID=63221249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711380389.7A Active CN108460752B (en) 2017-12-20 2017-12-20 Objective evaluation method for quality of asymmetric multi-distortion stereo image

Country Status (1)

Country Link
CN (1) CN108460752B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243385A (en) * 2015-09-23 2016-01-13 宁波大学 Unsupervised learning based image quality evaluation method
CN105894522A (en) * 2016-04-28 2016-08-24 宁波大学 Multi-distortion stereo image quality objective evaluation method
CN107371016A (en) * 2017-07-25 2017-11-21 天津大学 Based on asymmetric distortion without with reference to 3D stereo image quality evaluation methods

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Learning Blind Quality Evaluator for Stereoscopic Images Using Joint Sparse Representation; Feng Shao et al.; IEEE Transactions on Multimedia; 2016-07-27; full text *
Quality evaluation method for asymmetrically distorted stereoscopic images based on wavelet image fusion; Zhou Wujie et al.; Opto-Electronic Engineering (光电工程); 2011-11-30; full text *


Similar Documents

Publication Publication Date Title
CN108428227B (en) No-reference image quality evaluation method based on full convolution neural network
CN104902267B (en) No-reference image quality evaluation method based on gradient information
CN105574901B (en) A kind of general non-reference picture quality appraisement method based on local contrast pattern
CN102547368B (en) Objective evaluation method for quality of stereo images
CN102333233A (en) Stereo image quality objective evaluation method based on visual perception
CN102209257A (en) Stereo image quality objective evaluation method
CN105894522B (en) A kind of more distortion objective evaluation method for quality of stereo images
CN105282543B (en) Total blindness three-dimensional image quality objective evaluation method based on three-dimensional visual perception
CN105243385B (en) A kind of image quality evaluating method based on unsupervised learning
CN104902268B (en) Based on local tertiary mode without with reference to three-dimensional image objective quality evaluation method
CN105357519B (en) Quality objective evaluation method for three-dimensional image without reference based on self-similarity characteristic
CN110717892B (en) Tone mapping image quality evaluation method
CN104954778A (en) Objective stereo image quality assessment method based on perception feature set
CN106210711B (en) One kind is without with reference to stereo image quality evaluation method
CN103914835B (en) A kind of reference-free quality evaluation method for fuzzy distortion stereo-picture
CN106023152B (en) It is a kind of without with reference to objective evaluation method for quality of stereo images
CN106960432B (en) A kind of no reference stereo image quality evaluation method
CN105898279B (en) A kind of objective evaluation method for quality of stereo images
CN106683079A (en) No-reference image objective quality evaluation method based on structural distortion
CN104103065A (en) No-reference fuzzy image quality evaluation method based on singular value decomposition
CN104835172A (en) No-reference image quality evaluation method based on phase consistency and frequency domain entropy
CN105069794A (en) Binocular rivalry based totally blind stereo image quality evaluation method
CN108460752B (en) Objective evaluation method for quality of asymmetric multi-distortion stereo image
CN104616310B (en) The appraisal procedure and device of a kind of picture quality
CN107274388A (en) It is a kind of based on global information without refer to screen image quality evaluating method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230901

Address after: 230000 Room 203, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Hefei Jiuzhou Longteng scientific and technological achievement transformation Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20230901

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Patentee before: Ningbo University

TR01 Transfer of patent right

Effective date of registration: 20240109

Address after: No. 111, Building 106, Demai International Information Industry Center, No. 3000 Meili East Road, Huaiyin District, Jinan City, Shandong Province, 250000

Patentee after: Jinan Fengzhi Test Instrument Co.,Ltd.

Address before: 230000 Room 203, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Hefei Jiuzhou Longteng scientific and technological achievement transformation Co.,Ltd.