CN110636282B - No-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video - Google Patents

No-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video

Info

Publication number
CN110636282B
CN110636282B (application CN201910905772.2A)
Authority
CN
China
Prior art keywords
video
viewpoint
calculating
recorded
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910905772.2A
Other languages
English (en)
Other versions
CN110636282A (zh)
Inventor
彭宗举
崔帅南
邹文辉
陈芬
郁梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dragon Totem Technology Hefei Co ltd
Xi'an Lingjing Chenxing Culture Technology Co ltd
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN201910905772.2A
Publication of CN110636282A
Application granted
Publication of CN110636282B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N2013/0074 Stereoscopic image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention relates to a no-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video. The method comprises: downsampling each frame of the luminance channels of the left-view video and the right-view video of the virtual-viewpoint stereoscopic video to be evaluated twice, and computing, from the corresponding images at the three resulting scales, the spatial local, spatial global, temporal local and temporal global features of the left and right views; assembling these into a left-view feature vector and a right-view feature vector; computing, in parallel, the weight of the left-view video and the weight of the right-view video; fusing the left-view and right-view feature vectors with the corresponding left-view and right-view video weights to obtain a stereoscopic feature vector; and finally, with the stereoscopic feature vector as the input, computing the objective quality score of the virtual-viewpoint stereoscopic video to be evaluated using a random forest. The resulting assessment agrees better with human subjective perception.

Description

No-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video
Technical Field
The invention relates to the field of video quality assessment, and in particular to a no-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video.
Background
Three-dimensional video and free-viewpoint video (FVV) are being used ever more widely and provide users with a vivid, realistic visual experience. During transmission, network bandwidth constraints make it impossible to transmit the videos of all viewpoints. The multi-view video plus depth (MVD) representation only requires encoding the color videos and depth videos of a few viewpoints, and at the decoder side multiple virtual viewpoints can be generated with depth-image-based rendering (DIBR). However, most existing DIBR-based view-synthesis methods are still imperfect, and the synthesized virtual-viewpoint video may contain holes, ghosting artifacts, warping and other geometric distortions. When watching FVV, a user can observe the 3D scene from multiple angles, and during viewpoint switching the left and right views may be asymmetric, i.e., the left (right) view is an original view while the right (left) view is a synthesized view. Existing virtual-viewpoint quality assessment methods do not consider the case of asymmetric virtual-viewpoint distortion between the left and right views and therefore do not reflect perceived quality well. A quality assessment method that can effectively judge virtual-viewpoint stereoscopic video with asymmetric left/right distortion is thus needed; an assessment method consistent with human subjective perception can also serve as feedback to guide view-synthesis algorithms and improve the final virtual-viewpoint video quality.
In recent years quality assessment has attracted the attention of more and more researchers, and many image/video quality assessment methods have been proposed. Subjective assessment reflects human perception in the most direct way but costs a great deal of time and effort, so an objective method is needed to replace subjective assessment. Depending on whether the original image/video is required as a reference, objective quality assessment methods fall into three classes: full-reference (FR), reduced-reference (RR) and no-reference (NR). FR and RR methods require all or part of the information of the original image/video, whereas NR methods assess image/video quality without any reference information, which makes NR research more challenging than FR and RR. Since most virtual-viewpoint videos obtained with DIBR have no reference video, a no-reference video quality assessment method for virtual-viewpoint video is required.
Summary of the Invention
The technical problem to be solved by the invention is, in view of the state of the prior art, to provide a no-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video that effectively improves the objective assessment results while remaining consistent with human subjective perception.
The technical solution adopted by the invention to solve the above problem is a no-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video, characterized by comprising the following steps:
Step 1: input the virtual-viewpoint stereoscopic video to be evaluated, whose left and right views are asymmetrically distorted, and extract its left-view video VL and right-view video VR, where VL and VR both have width W, height H and a total of N frames;
Step 2: extract the luminance channel IL of VL and the luminance channel IR of VR, where the n-th frames of IL and IR are denoted I_L^n and I_R^n respectively;
Step 3: downsample every frame of IL and IR twice, obtaining for I_L^n and I_R^n the image after one downsampling and the image after two downsamplings; take I_L^n, its once-downsampled image and its twice-downsampled image as the images of the left-view luminance channel at three scales, and take I_R^n, its once-downsampled image and its twice-downsampled image as the images of the right-view luminance channel at three scales, where the n-th frame of the left-view luminance channel at the s-th scale is denoted I_L^{s,n} and the n-th frame of the right-view luminance channel at the s-th scale is denoted I_R^{s,n};
Step 4: from I_L^{s,n} and I_R^{s,n} of Step 3, compute the left-view and right-view spatial local features F_L^{SL} and F_R^{SL};
Step 5: compute the absolute difference between every pair of adjacent frames of I_L^{s,n} and of I_R^{s,n}, denoted DF_L^{s,m} and DF_R^{s,m}, m = 1, 2, 3, ..., N-1, where DF_L^{s,m} and DF_R^{s,m} are computed as DF_L^{s,m} = abs(I_L^{s,k1} - I_L^{s,k1-1}) and DF_R^{s,m} = abs(I_R^{s,k1} - I_R^{s,k1-1}) with m = k1 - 1, abs denotes the absolute-value function, I_L^{s,k1} denotes the k1-th frame of the left-view luminance channel at the s-th scale, I_L^{s,k1-1} denotes the (k1-1)-th frame of the left-view luminance channel at the s-th scale, k1 = 2, 3, 4, ..., N, I_R^{s,k1} denotes the k1-th frame of the right-view luminance channel at the s-th scale, and I_R^{s,k1-1} denotes the (k1-1)-th frame of the right-view luminance channel at the s-th scale;
Step 6: from DF_L^{s,m} and DF_R^{s,m} of Step 5, compute the left-view and right-view temporal local features F_L^{TL} and F_R^{TL};
Step 7: from I_L^{s,n} and I_R^{s,n} of Step 3, compute the left-view and right-view spatial global features F_L^{SG} and F_R^{SG};
Step 8: from DF_L^{s,m} and DF_R^{s,m} of Step 5, compute the left-view and right-view temporal global features F_L^{TG} and F_R^{TG};
Step 9: apply the non-subsampled shearlet transform to every frame of I_L^n and I_R^n, obtaining one low-frequency sub-band image and high-frequency sub-band images at four scales and ten directions; the shearlet high-frequency sub-band images in the k-th direction at the 4th scale obtained from I_L^n and I_R^n are denoted S_L^{n,k} and S_R^{n,k} respectively, where k = 1, 2, 3, ..., 10;
Step 10: from S_L^{n,k} and S_R^{n,k} of Step 9, compute the weight wL of the left-view video and the weight wR of the right-view video;
Step 11: assemble the left-view spatial local, temporal local, spatial global and temporal global features obtained above into the left-view feature vector FL, FL = [F_L^{SL}, F_L^{TL}, F_L^{SG}, F_L^{TG}], and assemble the right-view spatial local, temporal local, spatial global and temporal global features into the right-view feature vector FR, FR = [F_R^{SL}, F_R^{TL}, F_R^{SG}, F_R^{TG}];
Step 12: fuse FL and FR with the left-view and right-view video weights wL and wR obtained in Step 10 to obtain the stereoscopic feature vector F, F = wL×FL + wR×FR;
Step 13: with the stereoscopic feature vector F as the input, compute the objective quality score of the virtual-viewpoint stereoscopic video to be evaluated using a random forest.
Specifically, Step 4 comprises the following steps:
Step 4-1: erode I_L^{s,n} and I_R^{s,n} with a circular structuring element of size M1×M1, where M1 is a positive integer;
Step 4-2: subtract each eroded frame from the corresponding I_L^{s,n} and I_R^{s,n} to obtain their morphological edge images, denoted ME_L^{s,n} and ME_R^{s,n};
Step 4-3: compute the maxima of the horizontal standard deviations of ME_L^{s,n} and ME_R^{s,n}, denoted σ_{L,h}^{s,n} and σ_{R,h}^{s,n}, where h denotes the horizontal direction;
Step 4-4: sum σ_{L,h}^{s,n} and σ_{R,h}^{s,n} over all frames and take the average, the results being denoted σ_{L,h}^{s} and σ_{R,h}^{s};
Step 4-5: compute the maxima of the vertical standard deviations of ME_L^{s,n} and ME_R^{s,n}, denoted σ_{L,v}^{s,n} and σ_{R,v}^{s,n}, where v denotes the vertical direction;
Step 4-6: sum σ_{L,v}^{s,n} and σ_{R,v}^{s,n} over all frames and take the average, the results being denoted σ_{L,v}^{s} and σ_{R,v}^{s};
Step 4-7: assemble σ_{L,h}^{s} and σ_{L,v}^{s} over the three scales into the left-view spatial local feature F_L^{SL}, and assemble σ_{R,h}^{s} and σ_{R,v}^{s} over the three scales into the right-view spatial local feature F_R^{SL}.
Further, Step 6 comprises the following steps:
Step 6-1: erode DF_L^{s,m} and DF_R^{s,m} with a circular structuring element of size M2×M2, where M2 is a positive integer;
Step 6-2: subtract each eroded difference frame from the corresponding DF_L^{s,m} and DF_R^{s,m} to obtain their morphological edge images, denoted MD_L^{s,m} and MD_R^{s,m};
Step 6-3: compute the maxima of the horizontal standard deviations of MD_L^{s,m} and MD_R^{s,m}, denoted d_{L,h}^{s,m} and d_{R,h}^{s,m};
Step 6-4: sum d_{L,h}^{s,m} and d_{R,h}^{s,m} over all frames and take the average, the results being denoted d_{L,h}^{s} and d_{R,h}^{s};
Step 6-5: compute the maxima of the vertical standard deviations of MD_L^{s,m} and MD_R^{s,m}, denoted d_{L,v}^{s,m} and d_{R,v}^{s,m};
Step 6-6: sum d_{L,v}^{s,m} and d_{R,v}^{s,m} over all frames and take the average, the results being denoted d_{L,v}^{s} and d_{R,v}^{s};
Step 6-7: assemble d_{L,h}^{s} and d_{L,v}^{s} over the three scales into the left-view temporal local feature F_L^{TL}, and assemble d_{R,h}^{s} and d_{R,v}^{s} over the three scales into the right-view temporal local feature F_R^{TL}.
As an improvement, Step 7 comprises the following steps:
Step 7-1: apply mean-subtracted contrast normalization (MSCN) to I_L^{s,n} and I_R^{s,n} to obtain the first MSCN coefficient maps, denoted C_L^{s,n} and C_R^{s,n};
Step 7-2: fit the histograms of C_L^{s,n} and C_R^{s,n} with an asymmetric generalized Gaussian distribution, obtaining the four corresponding fit parameters of each, among which are the left-variance and right-variance parameters of C_L^{s,n} and the left-variance and right-variance parameters of C_R^{s,n};
Step 7-3: sum the four fit parameters of C_L^{s,n} and of C_R^{s,n} over all frames and take the average, obtaining the frame-averaged fit parameters of the left view and of the right view at each scale;
Step 7-4: assemble the frame-averaged fit parameters of the left view over the three scales into the left-view spatial global feature F_L^{SG}, and assemble those of the right view into the right-view spatial global feature F_R^{SG}.
Further, Step 8 comprises the following steps:
Step 8-1: apply mean-subtracted contrast normalization (MSCN) to DF_L^{s,m} and DF_R^{s,m} to obtain the second MSCN coefficient maps, denoted T_L^{s,m} and T_R^{s,m};
Step 8-2: fit the histograms of T_L^{s,m} and T_R^{s,m} with an asymmetric generalized Gaussian distribution, obtaining the four corresponding fit parameters of each, among which are the left-variance and right-variance parameters of T_L^{s,m} and the left-variance and right-variance parameters of T_R^{s,m};
Step 8-3: sum the four fit parameters of T_L^{s,m} and of T_R^{s,m} over all frames and take the average, obtaining the frame-averaged fit parameters of the left view and of the right view at each scale;
Step 8-4: assemble the frame-averaged fit parameters of the left view over the three scales into the left-view temporal global feature F_L^{TG}, and assemble those of the right view into the right-view temporal global feature F_R^{TG}.
In this solution, Step 10 comprises the following steps:
Step 10-1: compute the mean information entropies of S_L^{n,k} and S_R^{n,k}, denoted E_L^{n,k} and E_R^{n,k}, where the entropy is computed over the pixels (i, j) of each sub-band image, i = 1, 2, 3, ..., W, j = 1, 2, 3, ..., H, and entropy(·) denotes the information-entropy operation;
Step 10-2: sum E_L^{n,k} and E_R^{n,k} over the ten directions and take the average, the results being denoted E_L^{n} and E_R^{n};
Step 10-3: sum E_L^{n} and E_R^{n} over all frames and take the average, the results being denoted EL and ER;
Step 10-4: from EL and ER, compute the weights of the left-view video and of the right-view video, denoted wL and wR respectively.
Compared with the prior art, the invention has the following advantages: morphological edge images of the spatial and temporal domains of the virtual-viewpoint video are extracted at three scales from the distorted left-view and right-view videos, and the horizontal and vertical standard deviations of the edge images at the different scales are used as local features of the distorted video, which more accurately reflects the edge/texture distortion peculiar to virtual-viewpoint video; natural scene statistics features are additionally extracted from the spatial and temporal domains of the distorted left and right views at the different scales, and combining local with global features predicts the quality of virtual-viewpoint video more accurately; the weights of the left-view and right-view videos are computed from the information entropies of the directional shearlet sub-bands of the two views, and the features are fused into a stereoscopic feature according to these weights. Compared with existing objective quality assessment methods, the method therefore addresses the quality assessment of asymmetrically distorted virtual-viewpoint stereoscopic video more effectively and agrees better with human subjective perception.
Brief Description of the Drawings
Fig. 1 is a flowchart of the no-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video in an embodiment of the invention.
Detailed Description of the Embodiments
The invention is described in further detail below with reference to the accompanying drawing and the embodiment.
As shown in Fig. 1, a no-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video comprises the following steps:
Step 1: input the virtual-viewpoint stereoscopic video to be evaluated, whose left and right views are asymmetrically distorted, and extract its left-view video VL and right-view video VR, where VL and VR both have width W, height H and a total of N frames, and W, H and N are all natural numbers;
Step 2: extract the luminance channel IL of VL and the luminance channel IR of VR, where the n-th frames of IL and IR are denoted I_L^n and I_R^n respectively;
Step 3: downsample every frame of IL and IR twice, obtaining for I_L^n and I_R^n the image after one downsampling and the image after two downsamplings; take I_L^n, its once-downsampled image and its twice-downsampled image as the images of the left-view luminance channel at three scales, and take I_R^n, its once-downsampled image and its twice-downsampled image as the images of the right-view luminance channel at three scales, where the n-th frame of the left-view luminance channel at the s-th scale is denoted I_L^{s,n} and the n-th frame of the right-view luminance channel at the s-th scale is denoted I_R^{s,n}. An illustrative sketch of Steps 2 and 3 is given below.
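By way of illustration only, the following Python sketch builds the three-scale luminance pyramid of Steps 2 and 3 for a single frame; the helper name, the use of OpenCV, and the choice of cv2.pyrDown as the downsampling operator are assumptions, since the patent does not specify the colour conversion or the downsampling filter.

```python
import cv2
import numpy as np

def luminance_pyramid(frame_bgr, num_scales=3):
    """Luminance channel of one frame at num_scales scales (Steps 2-3).

    Scale 1 is the original resolution; each further scale is one more
    downsampling (here cv2.pyrDown, i.e. Gaussian blur plus decimation).
    """
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    luma = ycrcb[:, :, 0].astype(np.float64)        # luminance channel I^n
    scales = [luma]
    for _ in range(num_scales - 1):
        scales.append(cv2.pyrDown(scales[-1]))      # I^{s,n} for s = 2, 3
    return scales
```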
Step 4: from I_L^{s,n} and I_R^{s,n} of Step 3, compute the left-view and right-view spatial local features F_L^{SL} and F_R^{SL}. The specific steps are as follows:
Step 4-1: erode I_L^{s,n} and I_R^{s,n} with a circular structuring element of size M1×M1, where M1 is a positive integer; in this embodiment, because viewers are comparatively sensitive to texture-edge distortion in video, M1 = 4, i.e., a circular structuring element of size 4×4 is used to erode I_L^{s,n} and I_R^{s,n};
Step 4-2: subtract each eroded frame from the corresponding I_L^{s,n} and I_R^{s,n} to obtain their morphological edge images, denoted ME_L^{s,n} and ME_R^{s,n};
Step 4-3: compute the maxima of the horizontal standard deviations of ME_L^{s,n} and ME_R^{s,n}, denoted σ_{L,h}^{s,n} and σ_{R,h}^{s,n}, where h denotes the horizontal direction;
Step 4-4: sum σ_{L,h}^{s,n} and σ_{R,h}^{s,n} over all frames and take the average, the results being denoted σ_{L,h}^{s} and σ_{R,h}^{s}, i.e. σ_{L,h}^{s} = (1/N)·Σ_{n=1..N} σ_{L,h}^{s,n} and σ_{R,h}^{s} = (1/N)·Σ_{n=1..N} σ_{R,h}^{s,n};
Step 4-5: compute the maxima of the vertical standard deviations of ME_L^{s,n} and ME_R^{s,n}, denoted σ_{L,v}^{s,n} and σ_{R,v}^{s,n}, where v denotes the vertical direction;
Step 4-6: sum σ_{L,v}^{s,n} and σ_{R,v}^{s,n} over all frames and take the average, the results being denoted σ_{L,v}^{s} and σ_{R,v}^{s};
Step 4-7: assemble σ_{L,h}^{s} and σ_{L,v}^{s} over the three scales into the left-view spatial local feature F_L^{SL}, and assemble σ_{R,h}^{s} and σ_{R,v}^{s} over the three scales into the right-view spatial local feature F_R^{SL}. A sketch of Steps 4-1 to 4-6 is given below.
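A minimal sketch of Steps 4-1 to 4-6 for one view at one scale, assuming scikit-image and a disk-shaped structuring element of radius 2 as a stand-in for the 4×4 circular element; reading "maximum of the horizontal (vertical) standard deviation" as the maximum over rows (columns) of the per-row (per-column) standard deviation is an interpretation, not something the patent states explicitly.

```python
import numpy as np
from skimage.morphology import disk, erosion

def spatial_local_stats(frames):
    """Frame-averaged maxima of the horizontal and vertical standard
    deviations of the morphological edge images (Steps 4-1 to 4-6).

    frames: list of 2-D luminance images of one view at one scale.
    """
    selem = disk(2)                                  # stand-in for the 4x4 circular element
    sigma_h, sigma_v = [], []
    for img in frames:
        edge = img - erosion(img, selem)             # morphological edge image ME
        sigma_h.append(np.std(edge, axis=1).max())   # max over rows of the row-wise std
        sigma_v.append(np.std(edge, axis=0).max())   # max over columns of the column-wise std
    return float(np.mean(sigma_h)), float(np.mean(sigma_v))
```

Running this on the frame lists of each of the three scales and concatenating the returned pairs would yield the per-view spatial local feature.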
Step 5: compute the absolute difference between every pair of adjacent frames of I_L^{s,n} and of I_R^{s,n}, denoted DF_L^{s,m} and DF_R^{s,m}, m = 1, 2, 3, ..., N-1, where DF_L^{s,m} and DF_R^{s,m} are computed as DF_L^{s,m} = abs(I_L^{s,k1} - I_L^{s,k1-1}) and DF_R^{s,m} = abs(I_R^{s,k1} - I_R^{s,k1-1}) with m = k1 - 1; abs denotes the absolute-value function, I_L^{s,k1} denotes the k1-th frame of the left-view luminance channel at the s-th scale, I_L^{s,k1-1} denotes the (k1-1)-th frame of the left-view luminance channel at the s-th scale, k1 = 2, 3, 4, ..., N, I_R^{s,k1} denotes the k1-th frame of the right-view luminance channel at the s-th scale, and I_R^{s,k1-1} denotes the (k1-1)-th frame of the right-view luminance channel at the s-th scale.
Step 6: from DF_L^{s,m} and DF_R^{s,m} of Step 5, compute the left-view and right-view temporal local features F_L^{TL} and F_R^{TL}. The specific steps are as follows:
Step 6-1: erode DF_L^{s,m} and DF_R^{s,m} with a circular structuring element of size M2×M2, where M2 is a positive integer; in this embodiment, because viewers are comparatively sensitive to texture-edge distortion in video, M2 = 4, i.e., a circular structuring element of size 4×4 is used to erode DF_L^{s,m} and DF_R^{s,m};
Step 6-2: subtract each eroded difference frame from the corresponding DF_L^{s,m} and DF_R^{s,m} to obtain their morphological edge images, denoted MD_L^{s,m} and MD_R^{s,m};
Step 6-3: compute the maxima of the horizontal standard deviations of MD_L^{s,m} and MD_R^{s,m}, denoted d_{L,h}^{s,m} and d_{R,h}^{s,m};
Step 6-4: sum d_{L,h}^{s,m} and d_{R,h}^{s,m} over all frames and take the average, the results being denoted d_{L,h}^{s} and d_{R,h}^{s};
Step 6-5: compute the maxima of the vertical standard deviations of MD_L^{s,m} and MD_R^{s,m}, denoted d_{L,v}^{s,m} and d_{R,v}^{s,m};
Step 6-6: sum d_{L,v}^{s,m} and d_{R,v}^{s,m} over all frames and take the average, the results being denoted d_{L,v}^{s} and d_{R,v}^{s};
Step 6-7: assemble d_{L,h}^{s} and d_{L,v}^{s} over the three scales into the left-view temporal local feature F_L^{TL}, and assemble d_{R,h}^{s} and d_{R,v}^{s} over the three scales into the right-view temporal local feature F_R^{TL}. A sketch combining Steps 5 and 6 is given below.
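Steps 5 and 6 reuse the same edge statistics on absolute inter-frame differences; a sketch under the same assumptions, reusing spatial_local_stats from the sketch above:

```python
import numpy as np

def temporal_local_stats(frames):
    """Steps 5-6: apply the Step-4 edge statistics to the absolute
    differences of adjacent frames (frames: one view, one scale)."""
    diffs = [np.abs(frames[k] - frames[k - 1]) for k in range(1, len(frames))]
    return spatial_local_stats(diffs)   # defined in the Step-4 sketch above
```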
Step 7: from I_L^{s,n} and I_R^{s,n} of Step 3, compute the left-view and right-view spatial global features F_L^{SG} and F_R^{SG}. The specific steps are as follows:
Step 7-1: apply mean-subtracted contrast normalization (MSCN) to I_L^{s,n} and I_R^{s,n} to obtain the first MSCN coefficient maps, denoted C_L^{s,n} and C_R^{s,n};
Step 7-2: fit the histograms of C_L^{s,n} and C_R^{s,n} with an asymmetric generalized Gaussian distribution, obtaining the four corresponding fit parameters of each, among which are the left-variance and right-variance parameters of C_L^{s,n} and the left-variance and right-variance parameters of C_R^{s,n};
Step 7-3: sum the four fit parameters of C_L^{s,n} and of C_R^{s,n} over all frames and take the average, obtaining the frame-averaged fit parameters of the left view and of the right view at each scale;
Step 7-4: assemble the frame-averaged fit parameters of the left view over the three scales into the left-view spatial global feature F_L^{SG}, and assemble those of the right view into the right-view spatial global feature F_R^{SG}. A sketch of the MSCN computation is given below.
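A sketch of the MSCN computation of Step 7-1, following the usual BRISQUE-style definition with a Gaussian weighting window; the window parameter and the stabilizing constant are assumptions, and the AGGD histogram fitting of Step 7-2 is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7.0 / 6.0, eps=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of one image."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)                    # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu   # local variance
    std = np.sqrt(np.maximum(var, 0.0))                 # local standard deviation
    return (img - mu) / (std + eps)                     # MSCN coefficient map
```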
Step 8: from DF_L^{s,m} and DF_R^{s,m} of Step 5, compute the left-view and right-view temporal global features F_L^{TG} and F_R^{TG}. The specific steps are as follows:
Step 8-1: apply mean-subtracted contrast normalization (MSCN) to DF_L^{s,m} and DF_R^{s,m} to obtain the second MSCN coefficient maps, denoted T_L^{s,m} and T_R^{s,m};
Step 8-2: fit the histograms of T_L^{s,m} and T_R^{s,m} with an asymmetric generalized Gaussian distribution, obtaining the four corresponding fit parameters of each, among which are the left-variance and right-variance parameters of T_L^{s,m} and the left-variance and right-variance parameters of T_R^{s,m};
Step 8-3: sum the four fit parameters of T_L^{s,m} and of T_R^{s,m} over all frames and take the average, obtaining the frame-averaged fit parameters of the left view and of the right view at each scale;
Step 8-4: assemble the frame-averaged fit parameters of the left view over the three scales into the left-view temporal global feature F_L^{TG}, and assemble those of the right view into the right-view temporal global feature F_R^{TG}. A sketch of the frame-averaged statistics is given below.
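Steps 7-3/7-4 and 8-3/8-4 both reduce to averaging per-frame AGGD fit parameters; a sketch assuming a hypothetical helper aggd_fit that returns the four fit parameters of an array (the names of the two parameters other than the left and right variances are not given in the patent):

```python
import numpy as np

def global_nss_features(frames, aggd_fit):
    """Frame-averaged AGGD parameters of the MSCN coefficients (Steps 7/8).

    aggd_fit is a hypothetical helper returning the four asymmetric
    generalized Gaussian fit parameters of an array; mscn() is the
    sketch given after Step 7.
    """
    params = np.array([aggd_fit(mscn(f)) for f in frames])  # 4 parameters per frame
    return params.mean(axis=0)                               # average over frames
```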
Step 9: apply the non-subsampled shearlet transform to every frame of I_L^n and I_R^n, obtaining one low-frequency sub-band image and high-frequency sub-band images at four scales and ten directions; the shearlet high-frequency sub-band images in the k-th direction at the 4th scale obtained from I_L^n and I_R^n are denoted S_L^{n,k} and S_R^{n,k} respectively, where k = 1, 2, 3, ..., 10.
Step 10: from S_L^{n,k} and S_R^{n,k} of Step 9, compute the weight wL of the left-view video and the weight wR of the right-view video. The specific steps are as follows:
Step 10-1: compute the mean information entropies of S_L^{n,k} and S_R^{n,k}, denoted E_L^{n,k} and E_R^{n,k}, where the entropy is computed over the pixels (i, j) of each sub-band image, i = 1, 2, 3, ..., W, j = 1, 2, 3, ..., H, and entropy(·) denotes the information-entropy operation;
Step 10-2: sum E_L^{n,k} and E_R^{n,k} over the ten directions and take the average, the results being denoted E_L^{n} and E_R^{n}, i.e. E_L^{n} = (1/10)·Σ_{k=1..10} E_L^{n,k} and E_R^{n} = (1/10)·Σ_{k=1..10} E_R^{n,k};
Step 10-3: sum E_L^{n} and E_R^{n} over all frames and take the average, the results being denoted EL and ER;
Step 10-4: from EL and ER, compute the weights of the left-view video and of the right-view video, denoted wL and wR respectively. A sketch of Steps 10-1 to 10-4 is given below.
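A sketch of Steps 10-1 to 10-4, assuming that the ten 4th-scale shearlet sub-bands of each frame have already been produced by a non-subsampled shearlet transform implementation (not shown; no particular Python package is assumed). The patent gives the final weight formula only as an image, so the entropy-proportional normalization wL = EL/(EL+ER) used below is an assumption.

```python
import numpy as np
from skimage.measure import shannon_entropy

def view_weights(subbands_left, subbands_right):
    """subbands_*: nested lists [frame][direction] of 4th-scale NSST sub-bands.

    Returns (wL, wR) following Steps 10-1 to 10-4; the normalization
    wL = EL / (EL + ER) is an assumed form of the fusion weights.
    """
    def mean_entropy(subbands):
        per_frame = [np.mean([shannon_entropy(band) for band in directions])  # Steps 10-1, 10-2
                     for directions in subbands]
        return float(np.mean(per_frame))                                      # Step 10-3
    e_l = mean_entropy(subbands_left)
    e_r = mean_entropy(subbands_right)
    w_l = e_l / (e_l + e_r)
    return w_l, 1.0 - w_l
```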
Step 11: assemble the left-view spatial local, temporal local, spatial global and temporal global features obtained above into the left-view feature vector FL, FL = [F_L^{SL}, F_L^{TL}, F_L^{SG}, F_L^{TG}], and assemble the right-view spatial local, temporal local, spatial global and temporal global features into the right-view feature vector FR, FR = [F_R^{SL}, F_R^{TL}, F_R^{SG}, F_R^{TG}].
Step 12: fuse FL and FR with the left-view and right-view video weights wL and wR obtained in Step 10 to obtain the stereoscopic feature vector F, F = wL×FL + wR×FR.
Step 13: with the stereoscopic feature vector F as the input, compute the objective quality score of the virtual-viewpoint stereoscopic video to be evaluated using a random forest. A sketch of Steps 11 to 13 is given below.
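A sketch of Steps 11 to 13 using scikit-learn's RandomForestRegressor as the random-forest stage; the patent implies training on videos with known subjective scores, but the training interface shown here is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fuse_features(f_left, f_right, w_left, w_right):
    """Step 12: weighted fusion of the left- and right-view feature vectors."""
    return w_left * np.asarray(f_left) + w_right * np.asarray(f_right)

# Step 13 (training data with subjective scores, e.g. MOS, is assumed):
# model = RandomForestRegressor(n_estimators=100, random_state=0)
# model.fit(np.vstack(train_features), train_scores)
# quality = model.predict(fused_feature.reshape(1, -1))[0]
```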
In the prior art, view-synthesis distortions are geometric distortions such as ghosting artifacts, holes and warping. Unlike coding/compression distortion, which spreads over the whole image, they concentrate in texture-edge regions. Considering also that videos of different sizes are perceived differently by viewers, the invention extracts morphological edge images of the spatial and temporal domains of the virtual-viewpoint video at three scales from the distorted left-view and right-view videos. The edge images at different scales exhibit different degrees of edge distortion, and using their horizontal and vertical standard deviations as local features of the distorted video reflects the edge/texture distortion peculiar to virtual-viewpoint video more accurately.
Since local edge regions alone cannot accurately reflect the quality of the whole image, the invention further extracts natural scene statistics features from the spatial and temporal domains of the distorted left-view and right-view videos at the different scales; combining local with global features predicts the quality of virtual-viewpoint video more accurately.
When watching free-viewpoint video, the left and right views may be asymmetrically distorted, i.e., the left (right) view is an original view and the right (left) view is a virtual view. Objective quality scores computed in the prior art by combining the two views with equal weights of 0.5 do not match human subjective perception. The binocular rivalry mechanism indicates that regions with more complex texture detail carry more useful and important visual information, so the left and right views should be assigned different weights according to the texture detail they contain. The invention therefore computes the weights of the left-view and right-view videos from the information entropies of the directional shearlet sub-bands of the two views, fuses the features into a stereoscopic feature according to the left-view and right-view video weights, and finally trains a random forest to obtain the final objective quality score. Compared with existing objective quality assessment methods, this method addresses the quality assessment of asymmetrically distorted virtual-viewpoint stereoscopic video more effectively.

Claims (5)

1. A no-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video, characterized by comprising the following steps:
Step 1: inputting the virtual-viewpoint stereoscopic video to be evaluated, whose left and right views are asymmetrically distorted, and extracting its left-view video VL and right-view video VR, wherein VL and VR both have width W, height H and a total of N frames;
Step 2: extracting the luminance channel IL of VL and the luminance channel IR of VR, wherein the n-th frames of IL and IR are denoted I_L^n and I_R^n respectively;
Step 3: downsampling every frame of IL and IR twice, obtaining for I_L^n and I_R^n the image after one downsampling and the image after two downsamplings; taking I_L^n, its once-downsampled image and its twice-downsampled image as the images of the left-view luminance channel at three scales, and taking I_R^n, its once-downsampled image and its twice-downsampled image as the images of the right-view luminance channel at three scales, wherein the n-th frame of the left-view luminance channel at the s-th scale is denoted I_L^{s,n} and the n-th frame of the right-view luminance channel at the s-th scale is denoted I_R^{s,n};
Step 4: computing the left-view and right-view spatial local features F_L^{SL} and F_R^{SL} from I_L^{s,n} and I_R^{s,n} of Step 3, the specific steps being:
Step 4-1: eroding I_L^{s,n} and I_R^{s,n} with a circular structuring element of size M1×M1, M1 being a positive integer;
Step 4-2: subtracting each eroded frame from the corresponding I_L^{s,n} and I_R^{s,n} to obtain their morphological edge images, denoted ME_L^{s,n} and ME_R^{s,n};
Step 4-3: computing the maxima of the horizontal standard deviations of ME_L^{s,n} and ME_R^{s,n}, denoted σ_{L,h}^{s,n} and σ_{R,h}^{s,n}, h denoting the horizontal direction;
Step 4-4: summing σ_{L,h}^{s,n} and σ_{R,h}^{s,n} over all frames and taking the average, the results being denoted σ_{L,h}^{s} and σ_{R,h}^{s};
Step 4-5: computing the maxima of the vertical standard deviations of ME_L^{s,n} and ME_R^{s,n}, denoted σ_{L,v}^{s,n} and σ_{R,v}^{s,n}, v denoting the vertical direction;
Step 4-6: summing σ_{L,v}^{s,n} and σ_{R,v}^{s,n} over all frames and taking the average, the results being denoted σ_{L,v}^{s} and σ_{R,v}^{s};
Step 4-7: assembling σ_{L,h}^{s} and σ_{L,v}^{s} over the three scales into the left-view spatial local feature F_L^{SL}, and assembling σ_{R,h}^{s} and σ_{R,v}^{s} over the three scales into the right-view spatial local feature F_R^{SL};
Step 5: computing the absolute difference between every pair of adjacent frames of I_L^{s,n} and of I_R^{s,n}, denoted DF_L^{s,m} and DF_R^{s,m}, m = 1, 2, 3, ..., N-1, wherein DF_L^{s,m} and DF_R^{s,m} are computed as DF_L^{s,m} = abs(I_L^{s,k1} - I_L^{s,k1-1}) and DF_R^{s,m} = abs(I_R^{s,k1} - I_R^{s,k1-1}) with m = k1 - 1, abs denoting the absolute-value function, I_L^{s,k1} denoting the k1-th frame of the left-view luminance channel at the s-th scale, I_L^{s,k1-1} denoting the (k1-1)-th frame of the left-view luminance channel at the s-th scale, k1 = 2, 3, 4, ..., N, I_R^{s,k1} denoting the k1-th frame of the right-view luminance channel at the s-th scale, and I_R^{s,k1-1} denoting the (k1-1)-th frame of the right-view luminance channel at the s-th scale;
Step 6: computing the left-view and right-view temporal local features F_L^{TL} and F_R^{TL} from DF_L^{s,m} and DF_R^{s,m} of Step 5;
Step 7: computing the left-view and right-view spatial global features F_L^{SG} and F_R^{SG} from I_L^{s,n} and I_R^{s,n} of Step 3;
Step 8: computing the left-view and right-view temporal global features F_L^{TG} and F_R^{TG} from DF_L^{s,m} and DF_R^{s,m} of Step 5;
Step 9: applying the non-subsampled shearlet transform to every frame of I_L^n and I_R^n to obtain one low-frequency sub-band image and high-frequency sub-band images at four scales and ten directions, the shearlet high-frequency sub-band images in the k-th direction at the 4th scale obtained from I_L^n and I_R^n being denoted S_L^{n,k} and S_R^{n,k} respectively, k = 1, 2, 3, ..., 10;
Step 10: computing the weight wL of the left-view video and the weight wR of the right-view video from S_L^{n,k} and S_R^{n,k} of Step 9;
Step 11: assembling the left-view spatial local, temporal local, spatial global and temporal global features obtained above into the left-view feature vector FL, FL = [F_L^{SL}, F_L^{TL}, F_L^{SG}, F_L^{TG}], and assembling the right-view spatial local, temporal local, spatial global and temporal global features into the right-view feature vector FR, FR = [F_R^{SL}, F_R^{TL}, F_R^{SG}, F_R^{TG}];
Step 12: fusing FL and FR with the left-view and right-view video weights wL and wR obtained in Step 10 to obtain the stereoscopic feature vector F, F = wL×FL + wR×FR;
Step 13: with the stereoscopic feature vector F as the input, computing the objective quality score of the virtual-viewpoint stereoscopic video to be evaluated using a random forest.
2. The stereoscopic video quality assessment method according to claim 1, characterized in that the specific steps of Step 6 are:
Step 6-1: eroding DF_L^{s,m} and DF_R^{s,m} with a circular structuring element of size M2×M2, M2 being a positive integer;
Step 6-2: subtracting each eroded difference frame from the corresponding DF_L^{s,m} and DF_R^{s,m} to obtain their morphological edge images, denoted MD_L^{s,m} and MD_R^{s,m};
Step 6-3: computing the maxima of the horizontal standard deviations of MD_L^{s,m} and MD_R^{s,m}, denoted d_{L,h}^{s,m} and d_{R,h}^{s,m};
Step 6-4: summing d_{L,h}^{s,m} and d_{R,h}^{s,m} over all frames and taking the average, the results being denoted d_{L,h}^{s} and d_{R,h}^{s};
Step 6-5: computing the maxima of the vertical standard deviations of MD_L^{s,m} and MD_R^{s,m}, denoted d_{L,v}^{s,m} and d_{R,v}^{s,m};
Step 6-6: summing d_{L,v}^{s,m} and d_{R,v}^{s,m} over all frames and taking the average, the results being denoted d_{L,v}^{s} and d_{R,v}^{s};
Step 6-7: assembling d_{L,h}^{s} and d_{L,v}^{s} over the three scales into the left-view temporal local feature F_L^{TL}, and assembling d_{R,h}^{s} and d_{R,v}^{s} over the three scales into the right-view temporal local feature F_R^{TL}.
3. The stereoscopic video quality assessment method according to claim 1, characterized in that the specific steps of Step 7 are:
Step 7-1: applying mean-subtracted contrast normalization (MSCN) to I_L^{s,n} and I_R^{s,n} to obtain the first MSCN coefficient maps, denoted C_L^{s,n} and C_R^{s,n};
Step 7-2: fitting the histograms of C_L^{s,n} and C_R^{s,n} with an asymmetric generalized Gaussian distribution to obtain the four corresponding fit parameters of each, among which are the left-variance and right-variance parameters of C_L^{s,n} and the left-variance and right-variance parameters of C_R^{s,n};
Step 7-3: summing the four fit parameters of C_L^{s,n} and of C_R^{s,n} over all frames and taking the average, yielding the frame-averaged fit parameters of the left view and of the right view at each scale;
Step 7-4: assembling the frame-averaged fit parameters of the left view over the three scales into the left-view spatial global feature F_L^{SG}, and assembling those of the right view into the right-view spatial global feature F_R^{SG}.
4. The stereoscopic video quality assessment method according to claim 1, characterized in that the specific steps of Step 8 are:
Step 8-1: applying mean-subtracted contrast normalization (MSCN) to DF_L^{s,m} and DF_R^{s,m} to obtain the second MSCN coefficient maps, denoted T_L^{s,m} and T_R^{s,m};
Step 8-2: fitting the histograms of T_L^{s,m} and T_R^{s,m} with an asymmetric generalized Gaussian distribution to obtain the four corresponding fit parameters of each, among which are the left-variance and right-variance parameters of T_L^{s,m} and the left-variance and right-variance parameters of T_R^{s,m};
Step 8-3: summing the four fit parameters of T_L^{s,m} and of T_R^{s,m} over all frames and taking the average, yielding the frame-averaged fit parameters of the left view and of the right view at each scale;
Step 8-4: assembling the frame-averaged fit parameters of the left view over the three scales into the left-view temporal global feature F_L^{TG}, and assembling those of the right view into the right-view temporal global feature F_R^{TG}.
5. The stereoscopic video quality assessment method according to claim 1, characterized in that the specific steps of Step 10 are:
Step 10-1: computing the mean information entropies of S_L^{n,k} and S_R^{n,k}, denoted E_L^{n,k} and E_R^{n,k}, wherein the entropy is computed over the pixels (i, j) of each sub-band image, i = 1, 2, 3, ..., W, j = 1, 2, 3, ..., H, and entropy denotes the information-entropy operation;
Step 10-2: summing E_L^{n,k} and E_R^{n,k} over the ten directions and taking the average, the results being denoted E_L^{n} and E_R^{n};
Step 10-3: summing E_L^{n} and E_R^{n} over all frames and taking the average, the results being denoted EL and ER;
Step 10-4: computing the weights of the left-view video and of the right-view video from EL and ER, denoted wL and wR respectively.
CN201910905772.2A 2019-09-24 2019-09-24 No-reference quality assessment method for asymmetric virtual-viewpoint stereoscopic video Active CN110636282B (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910905772.2A CN110636282B (zh) 2019-09-24 2019-09-24 一种无参考非对称虚拟视点立体视频质量评价方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910905772.2A CN110636282B (zh) 2019-09-24 2019-09-24 一种无参考非对称虚拟视点立体视频质量评价方法

Publications (2)

Publication Number Publication Date
CN110636282A CN110636282A (zh) 2019-12-31
CN110636282B true CN110636282B (zh) 2021-04-09

Family

ID=68974004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910905772.2A Active CN110636282B (zh) 2019-09-24 2019-09-24 一种无参考非对称虚拟视点立体视频质量评价方法

Country Status (1)

Country Link
CN (1) CN110636282B (zh)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104853175B (zh) * 2015-04-24 2017-05-03 张艳 一种新的合成虚拟视点客观质量评价方法
CN106341677B (zh) * 2015-07-07 2018-04-20 中国科学院深圳先进技术研究院 虚拟视点视频质量评价方法
FR3051572A1 (fr) * 2016-05-20 2017-11-24 Romain Streichemberger Methode de lecture interactive de contenus preenregistres en realite virtuelle
CN106210711B (zh) * 2016-08-05 2017-10-31 宁波大学 一种无参考立体图像质量评价方法
CN109801257A (zh) * 2018-12-17 2019-05-24 天津大学 无参考dibr生成图像质量评价方法

Also Published As

Publication number Publication date
CN110636282A (zh) 2019-12-31

Similar Documents

Publication Publication Date Title
JP6027034B2 (ja) 立体映像エラー改善方法及び装置
Ryu et al. No-reference quality assessment for stereoscopic images based on binocular quality perception
Akhter et al. No-reference stereoscopic image quality assessment
Boev et al. Towards compound stereo-video quality metric: a specific encoder-based framework
Choi et al. Visual fatigue evaluation and enhancement for 2D-plus-depth video
US20110032341A1 (en) Method and system to transform stereo content
CN102186023B (zh) 一种双目立体字幕处理方法
JP2013527646A5 (zh)
CN103986925B (zh) 基于亮度补偿的立体视频视觉舒适度评价方法
Bosc et al. A quality assessment protocol for free-viewpoint video sequences synthesized from decompressed depth data
CN109510981B (zh) 一种基于多尺度dct变换的立体图像舒适度预测方法
CN114648482A (zh) 立体全景图像的质量评价方法、***
CN104853175B (zh) 一种新的合成虚拟视点客观质量评价方法
CN103905812A (zh) 一种纹理/深度联合上采样方法
Liu et al. An enhanced depth map based rendering method with directional depth filter and image inpainting
Qiao et al. Color correction and depth-based hierarchical hole filling in free viewpoint generation
US9787980B2 (en) Auxiliary information map upsampling
CN110636282B (zh) 一种无参考非对称虚拟视点立体视频质量评价方法
CN104052990B (zh) 一种基于融合深度线索的全自动二维转三维方法和装置
Kim et al. Measurement of critical temporal inconsistency for quality assessment of synthesized video
CN105208369A (zh) 一种立体图像视觉舒适度增强方法
Ruijters et al. IGLANCE: transmission to medical high definition autostereoscopic displays
KR20140113066A (ko) 차폐 영역 정보를 기반으로 하는 다시점 영상 생성 방법 및 장치
Zhang et al. A SVR based quality metric for depth quality assessment
Kim et al. Effects of depth map quantization for computer-generated multiview images using depth image-based rendering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230629

Address after: 710086 No. 2196, Fengdong Avenue (East Section), Fengdong New Town, Xixian New District, Xi'an City, Shaanxi Province, China Free Trade Xintiandi Chuangxing Community C0362

Patentee after: Xi'an Lingjing Chenxing Culture Technology Co.,Ltd.

Address before: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee before: Dragon totem Technology (Hefei) Co.,Ltd.

Effective date of registration: 20230629

Address after: 230000 floor 1, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Dragon totem Technology (Hefei) Co.,Ltd.

Address before: 315211, Fenghua Road, Jiangbei District, Zhejiang, Ningbo 818

Patentee before: Ningbo University

TR01 Transfer of patent right