CN104506852B - An objective quality assessment method for video conference coding - Google Patents

An objective quality assessment method for video conference coding

Info

Publication number
CN104506852B
CN104506852B CN201410826849.4A CN201410826849A CN104506852B
Authority
CN
China
Prior art keywords
face
eye
area
mouth
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410826849.4A
Other languages
Chinese (zh)
Other versions
CN104506852A (en)
Inventor
徐迈 (Xu Mai)
马源 (Ma Yuan)
张京泽 (Zhang Jingze)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201410826849.4A priority Critical patent/CN104506852B/en
Publication of CN104506852A publication Critical patent/CN104506852A/en
Application granted granted Critical
Publication of CN104506852B publication Critical patent/CN104506852B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an objective quality assessment method for video conference coding, comprising a training part and an assessment part. The training part includes: step 1, extracting the face and facial-feature regions; step 2, obtaining the degree of attention of individual pixels; step 3, calibrating and normalizing the face region; step 4, obtaining a Gaussian mixture model. The assessment part includes: step 1, for a group of videos, automatically extracting the pixel counts of the background, face, left-eye, right-eye, mouth and nose regions; step 2, calibrating and normalizing the face region; step 3, obtaining a weight map; step 4, computing a peak signal-to-noise ratio based on the Gaussian mixture model to assess image quality after video conference coding. The invention avoids the shortcoming of conventional methods, which ignore video content: by assigning greater weight to the facial regions of the video image, it improves the accuracy of image quality assessment so that the result better reflects subjective quality assessment.

Description

An objective quality assessment method for video conference coding
Technical field
The present invention relates to an objective quality assessment method for video conference coding, and belongs to the field of perceptual visual quality assessment for video conference coding.
Background technology
When assessing the efficiency of different video coding schemes, an index of visual quality is indispensable. Visual quality assessment for perceptual video coding falls into two classes: subjective evaluation and objective evaluation. Since humans are the ultimate recipients when watching video, subjective visual quality assessment is the most accurate and reliable way to evaluate video coding. However, its inefficiency and high cost have driven the development of objective visual quality metrics. The goal of objective evaluation is to improve its correlation with subjective visual quality so as to measure visual quality accurately. The most widely used objective metrics include peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual signal-to-noise ratio (VSNR), video quality metrics (VQM) and MOtion-based Video Integrity Evaluation (MOVIE).
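As background for the weighted metric developed below, the conventional PSNR mentioned above can be sketched as follows. This is a minimal illustration; the frame contents are made up.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, bit_depth: int = 8) -> float:
    """Conventional PSNR between a reference frame and a coded frame, in dB."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    peak = (2 ** bit_depth - 1) ** 2
    return 10.0 * np.log10(peak / mse)

# toy 4x4 frame with a single distorted pixel
ref = np.full((4, 4), 100, dtype=np.uint8)
deg = ref.copy()
deg[0, 0] = 110
print(round(psnr(ref, deg), 2))  # → 40.17
```

Note that PSNR treats every pixel equally, which is exactly the limitation the invention addresses.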
Perceptual video coding for video conferencing has been widely studied, because in video conferencing the face is a region of interest (ROI). However, no objective visual quality assessment method has so far been developed specifically for video conferencing.
Summary of the invention
The purpose of the invention is to remedy the shortcomings of existing objective evaluation methods for video quality by providing an objective metric for video conference coding, aimed at improving the correlation with the viewer's subjective perceptual quality.
An objective quality assessment method for video conference coding comprises a training part and an assessment part.
The training part includes the following steps:
Step 1: extract the face and facial-feature regions;
Step 2: conduct an eye-tracker experiment; obtain the fixation-point coordinate positions for each frame while the subjects watch the video, and from them the degree of attention of each pixel;
Step 3: calibrate and normalize the face region;
Step 4: obtain a Gaussian mixture model.
The assessment part includes the following steps:
Step 1: for a group of videos, repeat step 1 of the training part to automatically extract the pixel counts of the background, face, left-eye, right-eye, mouth and nose regions;
Step 2: repeat step 3 of the training part to calibrate and normalize the face region;
Step 3: based on the Gaussian mixture model obtained in the training stage, compute the Gaussian weights around the right eye, left eye, mouth, nose and remaining face region, together with the weight of the background region, to obtain a weight map;
Step 4: based on the weight map, compute the Gaussian-mixture-model-based peak signal-to-noise ratio and assess the image quality after video conference coding.
The advantages of the invention are:
(1) the invention targets image quality assessment after video conference coding and avoids the shortcoming of conventional methods, which ignore video content; by assigning greater weight to the facial regions of the video image it improves the accuracy of image quality assessment, so that the result better reflects subjective quality assessment;
(2) on the basis of extracting each facial-feature region (e.g. nose and mouth), the invention assigns larger weights to certain key facial regions, matching the trend of current and future video conference systems toward ever higher resolutions and ever larger displays;
(3) by introducing eye-tracker experimental data and combining it with the tools of statistical learning, the invention can mine the patterns of human visual attention during video conferencing and apply them to image quality assessment after video conference coding, greatly improving the correlation with subjective quality assessment.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the invention;
Fig. 2 shows the automatic facial-feature calibration algorithm;
Fig. 3 shows the automatic extraction of the key facial regions;
Fig. 4 shows the calibration and normalization method;
Fig. 5 shows the construction of the weight map;
Fig. 6 illustrates the computation of GMM-PSNR.
Detailed description of the invention
The invention is described in further detail below with reference to the drawings and embodiments.
The invention adopts a real-time automatic facial-feature calibration method to track the key feature points of the face. After face detection, a point distribution model (PDM) of the key feature points is generated for each video frame by combining local detection (texture information) with global optimization (facial structure). The invention uses a 66-point PDM to extract the contours of the face and facial features. The 66-point PDM samples the key points of the face and facial features well, so these points can be connected to accurately extract the contours and regions of the face and facial features. The face and the key facial regions are then extracted according to their contours.
For video of conversational scenes, experiments show that the facial content attracts the great majority of the observer's attention. Accordingly, the unequal importance of the background, face and facial features is quantified from the differences in observer attention, improving the accuracy of objective quality assessment for video conferencing. To obtain these unequal-importance values, eye-tracker experiments were conducted on conference-related videos.
In the experiments, an eye tracker recorded the eye fixation points falling in the video frames while observers watched the videos. Since fixation points represent the observer's focus of attention, the eye-tracking results can be used to build a subjective attention model. After the eye-tracker experiment, the numbers of fixation points belonging to the right eye, left eye, mouth, nose, other face regions and background are recorded. Based on the number of fixation points falling in each region, a new concept, eye fixation points per pixel (EFP/P), is introduced to reflect the attention these regions receive at the pixel level. Here, the following EFP/P values are obtained.
After the above eye-tracker results are obtained, they are used to train a GMM that produces an importance weight map for each video frame; GMM-PSNR can then be computed by combining the corresponding weight map. Before training the GMM, the fixation points obtained in the previous section are pre-processed by calibration and normalization. The GMM is then trained on the calibrated and normalized fixation points with the expectation-maximization (EM) algorithm, running EM iterations until convergence. Given the resulting GMM parameters, the weight map can be computed and the objective metric GMM-PSNR established.
The invention is an objective quality assessment method for video conference coding; the flow is shown in Fig. 1 and comprises a training part and an assessment part.
The training part includes the following steps:
Step 1: extract the face and facial-feature regions.
An automatic facial-feature calibration algorithm is used to automatically extract the pixel counts of the background, face, left-eye, right-eye, mouth and nose regions in a given video conference sequence.
Specifically: first, the face-region key points in each frame of the video conference sequence are obtained with the automatic facial-feature calibration algorithm; second, on the extracted face region, the mean-shift technique is used to locally search for the key points of the left-eye, right-eye, mouth and nose regions in the face-region image, and these key points are matched against the point distribution model (PDM) in the database to optimize them; third, the optimized key points of the face, left-eye, right-eye, mouth and nose regions in each frame are obtained (66 key points in total, as shown in Fig. 2); fourth, the key points of the face, left eye, right eye, mouth and nose are connected respectively to obtain their contours, as shown in Fig. 3; fifth, the pixel counts of the face, left-eye, right-eye, mouth and nose regions are obtained, and the face pixel count is subtracted from the total image pixel count to give the background pixel count, completing the automatic extraction of the key facial regions.
The point distribution model is obtained by training on a set of standard test images using the mean-shift technique.
The key points of the face, left-eye, right-eye, mouth and nose regions can thus be extracted from different face images.
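The fifth sub-step above reduces to counting the pixels inside each landmark contour and subtracting the face count from the frame total. A toy sketch of that counting, with a hypothetical square "mouth" polygon standing in for a real landmark contour:

```python
def region_pixel_count(polygon, height, width):
    """Count pixels whose centers fall inside a closed landmark polygon,
    using a simple ray-casting point-in-polygon test.
    `polygon` is a list of (x, y) vertices in pixel coordinates."""
    def inside(px, py):
        crossings = 0
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > py) != (y2 > py):
                # x-coordinate where the edge crosses the horizontal ray
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    crossings += 1
        return crossings % 2 == 1
    return sum(inside(x + 0.5, y + 0.5)
               for y in range(height) for x in range(width))

# hypothetical 10x10 frame with a square "mouth" region
mouth = [(2, 2), (6, 2), (6, 6), (2, 6)]
p_m = region_pixel_count(mouth, 10, 10)
p_b = 10 * 10 - p_m  # background = frame minus region, as in the fifth sub-step
print(p_m, p_b)      # → 16 84
```

A production implementation would rasterize the 66-point contours directly, but the counting principle is the same.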
Step 2: conduct an eye-tracker experiment; obtain the fixation-point coordinate positions for each frame while the subjects watch the video, and from them the degree of attention of each pixel.
The degree of attention of a single region (left eye, right eye, mouth, nose, other face regions, background) is defined as the number of eye fixation points falling in that region divided by its number of pixels (efp/p):
c_r = f_r/p_r, \quad c_l = f_l/p_l, \quad c_m = f_m/p_m, \quad c_n = f_n/p_n, \quad c_o = f_o/p_o, \quad c_b = f_b/p_b
where c_r, c_l, c_m, c_n, c_o and c_b denote the per-pixel degrees of attention of the right-eye, left-eye, mouth, nose, other-face and background regions respectively; f_r, f_l, f_m, f_n, f_o and f_b denote the numbers of fixation points that subjects in the eye-tracker experiment cast on those regions; and p_r, p_l, p_m, p_n, p_o and p_b denote the pixel counts of those regions;
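The EFP/P computation is a per-region division. A sketch with invented fixation and pixel counts (the numbers are illustrative only, not experimental data):

```python
# hypothetical fixation counts f and pixel counts p for each region
fixations = {"right_eye": 420, "left_eye": 390, "mouth": 510,
             "nose": 60, "face_other": 120, "background": 80}
pixels = {"right_eye": 700, "left_eye": 700, "mouth": 1200,
          "nose": 900, "face_other": 20000, "background": 80000}

# per-pixel degree of attention (EFP/P): c = f / p for each region
attention = {r: fixations[r] / pixels[r] for r in fixations}
for region, c in sorted(attention.items(), key=lambda kv: -kv[1]):
    print(f"{region}: {c:.4f}")
```

With numbers like these, the eyes and mouth dominate the per-pixel attention while the background is near zero, which is the pattern the patent's weighting exploits.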
Step 3: calibrate and normalize the face region.
Calibration removes the uncertainty caused by the varying position of the face in the image, and normalization makes the invention applicable when the number of face-region pixels differs across video conference frames.
Specifically:
As shown in Fig. 4(a), a frame is selected at random and the leftmost point among its face-region key points is taken as the calibration origin B. For every other frame, the leftmost point A among its face-region key points is found, the coordinate transformation between A and B is computed, and the fixation points of that frame are transformed accordingly, completing the calibration.
As shown in Fig. 4(b), a frame is selected at random and the horizontal width of the subject's right eye (the distance between the rightmost and leftmost of the 66 key points belonging to the right eye) is taken as the normalization unit; the fixation points of the other frames are normalized by this unit.
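The calibration and normalization of Fig. 4 amount to a translation by the offset between leftmost landmarks, followed by division by the right-eye width. A sketch under those assumptions; all coordinate values below are hypothetical:

```python
import numpy as np

def calibrate_and_normalize(points, face_left, ref_left, eye_width):
    """Shift fixation points so this frame's leftmost face landmark
    coincides with the reference frame's landmark (calibration), then
    divide by the right-eye width so faces of different sizes become
    comparable (normalization). Names and values are illustrative."""
    points = np.asarray(points, dtype=float)
    shift = np.asarray(ref_left, dtype=float) - np.asarray(face_left, dtype=float)
    return (points + shift) / eye_width

# reference frame's leftmost landmark at (10, 40); this frame's at (14, 42)
pts = calibrate_and_normalize([(34, 62), (54, 42)],
                              face_left=(14, 42), ref_left=(10, 40),
                              eye_width=10.0)
print(pts)  # fixation points in calibrated, eye-width units
```

After this transformation, fixation points from subjects and frames with differently sized and positioned faces live in one common coordinate system, which is what makes pooling them for GMM training meaningful.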
Step 4: obtain the Gaussian mixture model.
The eye fixation points are assumed to follow a Gaussian mixture model. On the basis of the normalized and calibrated eye-tracker data, the Gaussian mixture model is written as a linear superposition of Gaussian components:
p(x^*) = \sum_{k=1}^{K} \pi_k \mathcal{N}_k(x^*)

\mathcal{N}_k(x^*) = \frac{1}{2\pi} \cdot \frac{1}{|\Sigma_k|^{1/2}} \cdot \exp\left\{ -\frac{1}{2} (x^* - \mu_k)^T \Sigma_k^{-1} (x^* - \mu_k) \right\}
where \mathcal{N}_k(x^*) denotes a Gaussian component; \pi_k, \mu_k and \Sigma_k are the mixing coefficient, mean and covariance of the k-th Gaussian component; x^* is an eye fixation point after two-dimensional calibration and normalization; and K is the number of Gaussian components of the GMM. Because the nose attracts far fewer fixation points than the eyes and mouth, K is set to 3 here, with the components corresponding to the right eye, left eye and mouth. Meanwhile, \mu_k is fixed to the normalized centroid of the corresponding facial feature.
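The EM training of the K = 3 mixture can be sketched with scikit-learn's GaussianMixture on synthetic fixation points. Note one difference: sklearn only initializes the means at the feature centroids, whereas the patent fixes \mu_k to them, so this is an approximation; the fixation data below is also synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic calibrated fixation points clustered near three hypothetical
# feature centroids: right eye, left eye, mouth
centroids = np.array([[-1.0, 1.0], [1.0, 1.0], [0.0, -1.0]])
fixations = np.vstack([c + 0.15 * rng.standard_normal((200, 2))
                       for c in centroids])

# K = 3 components, one per feature; means initialized at the
# normalized feature centroids, mirroring the patent's training step
gmm = GaussianMixture(n_components=3, means_init=centroids,
                      random_state=0).fit(fixations)
print(np.round(gmm.weights_, 2))   # mixing coefficients pi_k
print(np.round(gmm.means_, 1))     # component means mu_k
```

With well-separated clusters, EM converges to mixing coefficients near 1/3 each and means near the centroids, which is the behavior the training step relies on.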
The above steps are carried out offline: for a group of training videos, the eye-tracker experiment is designed and the data analyzed to obtain the Gaussian mixture model used to assess the objective quality of the video conference system.
(2) The assessment part includes the following steps.
Step 1: identical to step 1 of the training process; automatically extract the background, face, left-eye, right-eye, mouth and nose regions.
Step 2: identical to step 3 of the training process; calibrate and normalize the face region of the video. Details are shown in Fig. 4.
Step 3: based on the Gaussian mixture model obtained in the training stage, compute the Gaussian weights around the right eye, left eye, mouth, nose and remaining face region, together with the weight of the background region. Details are shown in Fig. 5.
Fig. 5 shows the construction of the weight map. In this embodiment, the weight map quantifies the importance of each pixel of the face and background in the video conference system. The input of this example is one frame of a video conference. First, the face and the key facial regions are automatically extracted by the method of Fig. 3. Second, the key points in the video are calibrated and normalized by the method of Fig. 4. Finally, using the GMM parameters trained in steps 2 and 4 of the training part of Fig. 1, the weight of each pixel is computed by the formula below according to the region it belongs to (background, face, left eye, right eye, nose or mouth), and the weight map of the video conference image is output; the weight magnitude sets each pixel's importance in quality assessment.
Specifically,
g(x) = \frac{\max_k \pi_k \mathcal{N}_k(x)}{\sum_{x \in \mathrm{others}} \max_k \pi_k \mathcal{N}_k(x)} \cdot p_o
The invention is not limited to setting the image-pixel weights in this way.
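One possible reading of the weight-map rule is sketched below: each pixel takes the largest weighted Gaussian density max_k pi_k N_k(x) over the K components, floored by a constant background weight. This simplifies the patent's region-dependent formula (which also carries the p_o normalization term), and every parameter value here is invented:

```python
import numpy as np

def weight_map(height, width, pis, mus, sigmas, background_weight):
    """Per-pixel weight map: the largest weighted component density
    max_k pi_k * N_k(x) at each pixel, floored by a constant
    background weight (an illustrative simplification)."""
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.stack([xs, ys], axis=-1).astype(float)   # pixel coords (x, y)
    comps = []
    for pi, mu, sigma in zip(pis, mus, sigmas):
        inv = np.linalg.inv(sigma)
        det = np.linalg.det(sigma)
        d = pts - mu
        # Mahalanobis distance of every pixel to this component's mean
        mahal = np.einsum("...i,ij,...j->...", d, inv, d)
        comps.append(pi * np.exp(-0.5 * mahal) / (2 * np.pi * np.sqrt(det)))
    g = np.max(comps, axis=0)                          # max_k pi_k N_k(x)
    return np.maximum(g, background_weight)

# hypothetical 32x32 frame: components at right eye, left eye, mouth
w = weight_map(32, 32,
               pis=[0.4, 0.4, 0.2],
               mus=[np.array([10.0, 12.0]), np.array([22.0, 12.0]),
                    np.array([16.0, 24.0])],
               sigmas=[4.0 * np.eye(2)] * 3,
               background_weight=1e-4)
print(w.shape, round(float(w.max()), 4))
```

The result peaks at the eye and mouth centroids and decays toward the constant background floor, matching the qualitative shape Fig. 5 describes.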
Step 4: based on the weight map, compute the Gaussian-mixture-model-based peak signal-to-noise ratio (GMM-PSNR) and assess the image quality after video conference coding. Details are shown in Fig. 6.
Fig. 6 illustrates the computation of GMM-PSNR. In this embodiment, the computation outputs the GMM-PSNR that measures the coded image quality of the video conference system. First, as with a conventional metric (e.g. PSNR), the residual between the images before and after coding is obtained by computing the root-mean-square error between the original video image and the video image to be assessed. Then the error is multiplied by the weights of the weight map to obtain the value of GMM-MSE. Finally, GMM-PSNR is computed by taking the logarithm. The invention is not limited to improving the traditional PSNR: other metrics, such as structural similarity (SSIM), can be improved in the same way by multiplying by the weights of the weight map.
The specific formulas are as follows:

\mathrm{MSE}_{\mathrm{GMM}} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( \omega_x \cdot (I'_x - I_x) \right)^2}{\sum_{i=1}^{M} \sum_{j=1}^{N} \omega_x^2}

\mathrm{PSNR}_{\mathrm{GMM}} = 10 \cdot \log \frac{(2^n - 1)^2}{\mathrm{MSE}_{\mathrm{GMM}}}
where I'_x and I_x are the values of pixel x in the processed video frame and the original video frame respectively, M and N are the numbers of pixels in the vertical and horizontal directions, and n (= 8) is the bit depth.
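The two formulas above translate directly into code. A sketch with a made-up 4x4 frame and weight map; the logarithm is taken base 10, as is conventional for PSNR:

```python
import numpy as np

def gmm_psnr(reference, distorted, weights, bit_depth=8):
    """GMM-PSNR: squared error of each pixel scaled by its importance
    weight w_x, normalized by the sum of squared weights, then converted
    to dB exactly as ordinary PSNR (formulas as in the text above)."""
    err = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.sum((weights * err) ** 2) / np.sum(weights ** 2)
    peak = (2 ** bit_depth - 1) ** 2
    return 10.0 * np.log10(peak / mse)

ref = np.full((4, 4), 128.0)
deg = ref.copy()
deg[1, 1] += 10          # distortion landing on a highly weighted pixel
w = np.ones((4, 4))
w[1, 1] = 3.0            # hypothetical weight map: this pixel counts 3x
print(round(gmm_psnr(ref, deg, w), 2))                  # weighted score
print(round(gmm_psnr(ref, deg, np.ones((4, 4))), 2))    # plain-PSNR case
```

With a uniform weight map the metric reduces to conventional PSNR; concentrating weight on the distorted pixel lowers the score, i.e. the same distortion is judged more severe when it hits an important facial region.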
Finally, the invention outputs the Gaussian-mixture-model-based peak signal-to-noise ratio (GMM-PSNR) after video conference coding, used to measure the degradation of image quality caused by video coding. As with the conventional peak signal-to-noise ratio (PSNR), the unit of GMM-PSNR is dB. However, because viewers pay different amounts of attention to different regions of the image, GMM-PSNR assigns weights of different sizes to the unequally important face regions of the video conference system, substantially improving its correlation with subjective quality assessment.
The invention provides a more effective assessment method for the quality of video transmission in video conferencing. Experiments show that, relative to traditional objective video assessment methods such as VQM, MOVIE and PSNR, GMM-PSNR markedly improves the correlation with subjective testing standards such as MOS and DMOS, demonstrating that GMM-PSNR can serve as a more effective objective metric for video conference coding. This benefits video processing, compression and video communication for video conferencing: the metric can monitor the performance of a video system and provide feedback for adjusting codec or channel parameters, ensuring that video quality stays within an acceptable range. The video quality assessment criterion can also be used to design, evaluate and optimize codec performance, and to design and optimize digital video systems that conform to visual models.
The invention relates to an objective quality assessment method for video sequences, for perceptual visual quality assessment of video conference coding. It employs eye-tracker experiments and real-time extraction of faces and facial features. In the experiments, the importance of the background, face and facial-feature regions is determined from the attention observers pay to each part. Using the eye fixation points collected by the eye tracker, and assuming they follow a Gaussian mixture model, an importance weight map can be generated that captures the observer's attention to each region of the conference video. From this weight map, each pixel in a video frame can be assigned a different weight, improving existing objective quality assessment methods. More particularly, the invention relates to perceptual video quality assessment of video conference coding built on existing video quality evaluation methods.

Claims (3)

1. An objective quality assessment method for video conference coding, comprising a training part and an assessment part;
the training part including the following steps:
Step 1: extract the face and facial-feature regions;
an automatic facial-feature calibration algorithm is used to automatically extract the pixel counts of the background, face, left-eye, right-eye, mouth and nose regions in a given video conference sequence;
Step 2: conduct an eye-tracker experiment; obtain the fixation-point coordinate positions for each frame while the subjects watch the video, and from them the degree of attention of each pixel;
the degree of attention of a single region is defined as the number of eye fixation points falling in that region divided by its number of pixels, efp/p, the single region being the left eye, right eye, mouth, nose, other face regions or background; then:
c_r = f_r/p_r, \quad c_l = f_l/p_l, \quad c_m = f_m/p_m, \quad c_n = f_n/p_n, \quad c_o = f_o/p_o, \quad c_b = f_b/p_b
where c_r, c_l, c_m, c_n, c_o and c_b denote the per-pixel degrees of attention of the right-eye, left-eye, mouth, nose, other-face and background regions respectively; f_r, f_l, f_m, f_n, f_o and f_b denote the numbers of fixation points that subjects in the eye-tracker experiment cast on those regions; and p_r, p_l, p_m, p_n, p_o and p_b denote the pixel counts of those regions;
Step 3: calibrate and normalize the face region;
specifically:
a frame is selected at random and the leftmost point among its face-region key points is taken as the calibration origin B; for every other frame, the leftmost point A among its face-region key points is found, the coordinate transformation between A and B is computed, and the fixation points of that frame are transformed accordingly, completing the calibration;
a frame is selected at random and the horizontal width of the subject's right eye is taken as the normalization unit; the fixation points of the other frames are normalized by this unit;
Step 4: obtain the Gaussian mixture model;
the eye fixation points are assumed to follow a Gaussian mixture model; on the basis of the normalized and calibrated eye-tracker data, the Gaussian mixture model is written as a linear superposition of Gaussian components:

p(x^*) = \sum_{k=1}^{K} \pi_k \mathcal{N}_k(x^*), \qquad \mathcal{N}_k(x^*) = \frac{1}{2\pi} \cdot \frac{1}{|\Sigma_k|^{1/2}} \cdot \exp\left\{ -\frac{1}{2} (x^* - \mu_k)^T \Sigma_k^{-1} (x^* - \mu_k) \right\}

where \mathcal{N}_k(x^*) denotes a Gaussian component; \pi_k, \mu_k and \Sigma_k are the mixing coefficient, mean and covariance of the k-th Gaussian component; x^* is a fixation point after two-dimensional calibration and normalization; and K is the number of Gaussian components of the GMM;
steps 1-4 above are carried out offline: for a group of training videos, the Gaussian mixture model used to assess the objective quality of the video conference system is obtained;
the assessment part including the following steps:
Step 1: for a group of videos, repeat step 1 of the training part to automatically extract the pixel counts of the background, face, left-eye, right-eye, mouth and nose regions;
Step 2: repeat step 3 of the training process to calibrate and normalize the face region;
Step 3: based on the Gaussian mixture model obtained in the training stage, compute the Gaussian weights around the right eye, left eye, mouth, nose and other face regions, together with the weight of the background region, to obtain a weight map;
Step 4: based on the weight map, compute the Gaussian-mixture-model-based peak signal-to-noise ratio and assess the image quality after video conference coding.
2. The objective quality assessment method for video conference coding according to claim 1, wherein step 1 of the training part specifically comprises:
first, obtaining the face-region key points in each frame of the video conference sequence with the automatic facial-feature calibration algorithm;
second, on the extracted face region, using the mean-shift technique to locally search for the key points of the left-eye, right-eye, mouth and nose regions in the face-region image, and matching these key points against the key-point distribution model in the database to optimize them;
third, obtaining the optimized key points of the face, left-eye, right-eye, mouth and nose regions in each frame;
fourth, connecting the key points of the face, left eye, right eye, mouth and nose respectively to obtain their contours;
fifth, obtaining the pixel counts of the face, left-eye, right-eye, mouth and nose regions, and subtracting the face pixel count from the total image pixel count to give the background pixel count, completing the automatic extraction of the key facial regions.
3. The objective quality assessment method for video conference coding according to claim 1, wherein in step 4 of the training part, K = 3, the components corresponding respectively to the right eye, left eye and mouth, and \mu_k is set to the normalized centroid of each facial feature.
CN201410826849.4A 2014-12-25 2014-12-25 An objective quality assessment method for video conference coding Active CN104506852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410826849.4A CN104506852B (en) 2014-12-25 2014-12-25 An objective quality assessment method for video conference coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410826849.4A CN104506852B (en) 2014-12-25 2014-12-25 An objective quality assessment method for video conference coding

Publications (2)

Publication Number Publication Date
CN104506852A CN104506852A (en) 2015-04-08
CN104506852B true CN104506852B (en) 2016-08-24

Family

ID=52948564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410826849.4A Active CN104506852B (en) 2014-12-25 2014-12-25 An objective quality assessment method for video conference coding

Country Status (1)

Country Link
CN (1) CN104506852B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10860858B2 (en) * 2018-06-15 2020-12-08 Adobe Inc. Utilizing a trained multi-modal combination model for content and text-based evaluation and distribution of digital video content to client devices
CN109376645B (en) * 2018-10-18 2021-03-26 深圳英飞拓科技股份有限公司 Face image data optimization method and device and terminal equipment
CN110365966B (en) * 2019-06-11 2020-07-28 北京航空航天大学 Video quality evaluation method and device based on window
CN113506260B (en) * 2021-07-05 2023-08-29 贝壳找房(北京)科技有限公司 Face image quality assessment method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102170552A (en) * 2010-02-25 2011-08-31 株式会社理光 Video conference system and processing method used therein
CN102984540A (en) * 2012-12-07 2013-03-20 浙江大学 Video quality assessment method estimated on basis of macroblock domain distortion degree
WO2013056123A2 (en) * 2011-10-14 2013-04-18 T-Mobile USA, Inc Quality of user experience testing for video transmissions
CN104243994A (en) * 2014-09-26 2014-12-24 厦门亿联网络技术股份有限公司 Method for real-time motion sensing of image enhancement

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8279259B2 (en) * 2009-09-24 2012-10-02 Microsoft Corporation Mimicking human visual system in detecting blockiness artifacts in compressed video streams

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102170552A (en) * 2010-02-25 2011-08-31 株式会社理光 Video conference system and processing method used therein
WO2013056123A2 (en) * 2011-10-14 2013-04-18 T-Mobile USA, Inc Quality of user experience testing for video transmissions
CN102984540A (en) * 2012-12-07 2013-03-20 浙江大学 Video quality assessment method estimated on basis of macroblock domain distortion degree
CN104243994A (en) * 2014-09-26 2014-12-24 厦门亿联网络技术股份有限公司 Method for real-time motion sensing of image enhancement

Also Published As

Publication number Publication date
CN104506852A (en) 2015-04-08

Similar Documents

Publication Publication Date Title
CN109815907B (en) Sit-up posture detection and guidance method based on computer vision technology
CN107027023B (en) Based on the VoIP of neural network without reference video communication quality method for objectively evaluating
CN102421007B (en) Image quality evaluating method based on multi-scale structure similarity weighted aggregate
CN101976444B (en) Pixel type based objective assessment method of image quality by utilizing structural similarity
CN104506852B (en) An objective quality assessment method for video conference coding
CN107396095B (en) A kind of no reference three-dimensional image quality evaluation method
CN107563995A (en) A kind of confrontation network method of more arbiter error-duration models
CN105160678A (en) Convolutional-neural-network-based reference-free three-dimensional image quality evaluation method
CN101950422B (en) Singular value decomposition(SVD)-based image quality evaluation method
US20100208078A1 (en) Horizontal gaze estimation for video conferencing
CN110490158B (en) Robust face alignment method based on multistage model
CN110991281A (en) Dynamic face recognition method
CN105338343A (en) No-reference stereo image quality evaluation method based on binocular perception
CN104867138A (en) Principal component analysis (PCA) and genetic algorithm (GA)-extreme learning machine (ELM)-based three-dimensional image quality objective evaluation method
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN104202594B (en) A kind of method for evaluating video quality based on 3 D wavelet transformation
WO2021068781A1 (en) Fatigue state identification method, apparatus and device
CN109872305A (en) It is a kind of based on Quality Map generate network without reference stereo image quality evaluation method
CN104811691A (en) Stereoscopic video quality objective evaluation method based on wavelet transformation
CN102663747A (en) Stereo image objectivity quality evaluation method based on visual perception
CN106447695A (en) Same object determining method and device in multi-object tracking
CN104866864A (en) Extreme learning machine for three-dimensional image quality objective evaluation
CN106993188A (en) A kind of HEVC compaction coding methods based on plurality of human faces saliency
CN103745466A (en) Image quality evaluation method based on independent component analysis
CN108259893B (en) Virtual reality video quality evaluation method based on double-current convolutional neural network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant