CN105915987A - Implicit interaction method facing smart television set - Google Patents

Implicit interaction method facing smart television set

Info

Publication number
CN105915987A
Authority
CN
China
Prior art keywords
gesture
user
interactive
dynamic
information
Prior art date
Legal status: Granted
Application number
CN201610237422.XA
Other languages
Chinese (zh)
Other versions
CN105915987B (en)
Inventor
冯志全
徐治鹏
Current Assignee
University of Jinan
Original Assignee
University of Jinan
Priority date
Filing date
Publication date
Application filed by University of Jinan
Priority to CN201610237422.XA
Publication of CN105915987A
Application granted
Publication of CN105915987B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42201Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42202Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] environmental sensors, e.g. for detecting temperature, luminosity, pressure, earthquakes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Emergency Management (AREA)
  • Health & Medical Sciences (AREA)
  • Ecology (AREA)
  • Business, Economics & Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • Environmental Sciences (AREA)
  • Remote Sensing (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention belongs to the field of intelligent electric appliances and provides an implicit interaction method facing a smart television set. The method comprises: obtaining user posture behavior information in real time, detecting the user position, and detecting and identifying user gesture motions; detecting the function state information of the smart television set and obtaining low-level explicit interaction information; combining the processed user posture behavior information with the real-time function state information of the smart television set, establishing a multi-level dynamic context inference model based on user behavior and smart television state, and obtaining high-level implicit interaction information; and visualizing the implicit interaction information, identifying the gesture motions completed by the user under the guidance of the visualized implicit information, and establishing an implicit interaction behavior model based on the fusion of explicit and implicit information, so as to complete the interaction task.

Description

An implicit interaction method for smart televisions
Technical field
The invention belongs to the field of intelligent electric appliances and specifically relates to an implicit interaction method for smart televisions.
Background technology
With the development of human-computer interaction technology, vision-based gesture interaction has become increasingly important in the field of human-computer interaction. Compared with the traditional WIMP interaction paradigm, vision-based gesture interaction frees the user from the constraints of mouse and keyboard and offers a larger interactive space and a more lifelike interactive experience. It has been widely applied in fields such as virtual assembly, augmented reality, somatosensory games, robot control and smart TV interaction. In smart TV gesture interaction systems, vision-based gesture interaction frees the user from the remote control and provides a natural way to operate the TV remotely. Because the functions of a smart TV are numerous and complex, completing an operation requires a large number of simple gesture commands and combinations of simple gesture commands. This large command set increases the user's memory burden and imposes a heavy cognitive load; at the same time, the recognition-rate, Midas Touch and complex gesture command problems inherent in vision-based gesture interaction limit the accuracy of user operations and likewise impose a heavy operational load on the user.
To address the problems in vision-based gesture interaction, Wu Huiyue et al. (Wu Huiyue, Zhang Fengjun, Liu Yujin, et al. Research on key techniques of vision-based gesture interfaces [J]. Chinese Journal of Computers, 2009, 32(10): 2030-2041) approached the interaction process from the perspective of cognitive psychology, dividing it into three stages — selective processing, distributive processing and focused processing — and, drawing on the human attention model in conscious information processing, proposed a contactless visual gesture state transition model. By imitating the mechanism with which the human visual system identifies target objects, the system gains the ability to process critical information selectively, effectively avoiding the Midas Touch problem. Liang Zhuorui et al. (Liang Zhuorui, Xu Xiangmin. Adaptive adjustment of mapping relations for visual gesture interaction [J]. Journal of South China University of Technology: Natural Science Edition, 2014, 42(8): 52-57) proposed an adaptive mapping adjustment method based on user operation characteristics, using Borg's CR-10 scale psychological response experiments to test users' perception of hand movement; the method adjusts the mapping equation after each continuous interactive operation, allowing the user to cover full-screen operations within a physically comfortable range of motion and improving the user experience by reducing the amount of hand movement. Wang Xiying et al. (Wang Xiying, Zhang Xiwen, Dai Guozhong. A deformable gesture tracking method for real-time interaction [J]. Journal of Software, 2007, 18(10): 2423-2433) proposed a novel real-time tracking method for deformable gestures, substituting a set of 2D gesture models for the high-dimensional 3D gesture model; by locating fingers and fingertips in the image and combining K-means clustering with particle filtering, it achieves fast, accurate and continuous tracking of deformable gestures and meets real-time requirements. However, the method places high demands on the quality of gesture image segmentation, which affects the robustness of the interaction. Wei-Po Lee et al. (Lee W P, Che K, Huang J Y. A smart TV system with body-gesture control, tag-based rating and context-aware recommendation [J]. Knowledge-Based Systems, 2014, 56(3): 167-178) used a Kinect somatosensory camera to implement natural gesture control of a smart TV and built a recommendation system based on social tagging and the user's situational context, recommending the service content best suited to the user's individual needs. This approach incorporates the situational context in which the user operates the smart TV into the content recommendation service and relieves the user's cognitive and operational burden to some extent, but it does not consider the effect that the context of the user's own body behavior has on relieving that burden. Vatavu (Vatavu R D. User-defined gestures for free-hand TV control [C]//Proceedings of the 10th European Conference on Interactive TV and Video. ACM, 2012: 45-48) proposed an interactive system in which user-defined gestures control the TV; by studying users' gesture preferences when completing basic TV operation tasks and observing user behavior, an optimal mapping between user gestures and TV functions is established, yielding the best gesture operation for a given TV task. However, users still need to memorize a large number of gesture motions to operate the TV, so the cognitive load remains high. Tian Feng et al. (Tian Feng, Deng Changzhi, Zhou Mingjun, et al. Research on implicit interaction properties of Post-WIMP interfaces [J]. Journal of Frontiers of Computer Science and Technology, 2007(2)) proposed an implicit interaction method for Post-WIMP interfaces, supporting implicit interaction with recognition technology, context-aware technology and user correction technology; the method frees users from attending to how an interactive task is executed, letting them focus on the task itself and complete it in a more natural way. Xu Guangyou et al. (Xu Guangyou, Tao Linmi, Shi Yuanchun, et al. Human-computer interaction in the ubiquitous computing mode [J]. Chinese Journal of Computers, 2007, 30(7): 1041-1053) analyzed human-computer interaction in ubiquitous computing environments in depth and proposed an implicit interaction pattern with the user and the environment as the main influencing factors. They divided interaction in physical space into interaction based on physical-space interfaces and implicit interaction based on perceived context; in implicit interaction the computing system uses context knowledge to interpret and understand the user's operations and treats them as additional input to the system, thereby completing the interactive task. Extraction of perceived context information and perceptual reasoning are the foundation of implicit interaction. Ye Xiyong et al. (Ye Xiyong, Tao Linmi, Wang Guojian. Implicit interaction based on action understanding [C]//Proceedings of the 7th Joint Conference on Harmonious Human-Machine Environment (HHME 2011) [oral]. 2011) proposed a dynamic context model and an ADL-DBN inference model for elderly-care human-computer interaction, realizing an implicit interaction mode based on action understanding; this interaction mode helps the computer understand a person's intention without distracting the person's attention, completing the interactive task. Wang Guojian (Wang Guojian, Tao Linmi. A distributed vision system supporting implicit human-computer interaction [J]. Journal of Image and Graphics, 2010, 15(8): 1133-1138) proposed a distributed vision system supporting implicit HCI and applied it in a small-meeting scenario. In vision-based gesture interaction, the ambiguity of context knowledge means that interpretations of human actions are inherently polysemous (see: Xu Guangyou. Understanding body language in human-computer interaction [M]. Publishing House of Electronics Industry, 2014). Traditional rule-based knowledge representation and reasoning cannot effectively reflect the ambiguity of interaction context information. Guan Zhiwei (Guan Zhiwei. User-intention-oriented intelligent human-computer interaction [D]. Institute of Software, Chinese Academy of Sciences, 2000) first applied fuzzy cognitive maps, FCM (Kosko, Bart. Fuzzy cognitive maps [J]. International Journal of Man-Machine Studies, 1986, 24(1): 65-75), to fuzzy knowledge representation and reasoning in human-computer interaction, effectively realizing the high-level cognitive process of natural human-computer interaction. However, because FCM does not provide a rich and dynamic inference mechanism, it cannot represent the uncertainty of the causal estimates between interaction concepts (see: Ma Nan, Yang Pingru, Bao Hong, et al. Progress in fuzzy cognitive maps [J]. Computer Science, 2011, 38(10): 23-28). Papageorgiou E et al. (Papageorgiou E, Stylios C, Groumpos P. Fuzzy Cognitive Map Learning Based on Nonlinear Hebbian Rule [M]//AI 2003: Advances in Artificial Intelligence. Springer Berlin Heidelberg, 2003: 256-268) proposed a dynamic fuzzy cognitive model that performs reasoning through extensive computation, enhancing the dynamics of the concept nodes.
In summary, the main problems in current vision-based smart TV gesture interaction are the heavy cognitive and operational loads placed on the user.
Summary of the invention
The object of the present invention is to solve the problems in the above prior art by providing an implicit interaction method for smart televisions that effectively improves the interactive experience of smart TV users and reduces the user's operational and cognitive loads.
The present invention is achieved by the following technical solutions:
An implicit interaction method for smart televisions, including: obtaining the user's posture behavior information in real time, detecting the user position, and detecting and identifying the user's gesture motions; at the same time detecting the function state information of the smart TV and obtaining the low-level explicit interaction information; combining the processed user posture behavior information with the real-time function state information of the smart TV, establishing a multi-level dynamic context inference model based on user behavior and smart TV state, and obtaining the high-level implicit interaction information; and visualizing the implicit interaction information, identifying the gesture motions the user completes under the guidance of the visualized implicit information, and establishing an implicit interaction behavior model with fusion of explicit and implicit information to complete the interaction task.
The user position refers to the horizontal distance and angle of the user relative to the camera on the smart TV, and is detected as follows:
the three-dimensional coordinates of the main human joints are obtained through Kinect, and the position of the body relative to the smart TV is determined from the head node and the body center-of-gravity coordinate information.
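A minimal Python sketch of this position computation, assuming the head and center-of-gravity joints arrive as (x, y, z) tuples in metres from an OpenNI skeleton tracker mounted at the screen; the function name, the averaging of the two joints and the frame conventions are illustrative assumptions, not the patent's specification:

    import math

    def user_position(head, center_of_mass):
        # Average the head and centre-of-gravity joints for a stable body point
        # (how the patent combines the two joints is not specified).
        x = (head[0] + center_of_mass[0]) / 2.0
        z = (head[2] + center_of_mass[2]) / 2.0
        distance = math.hypot(x, z)              # horizontal range to the screen
        angle = math.degrees(math.atan2(x, z))   # bearing off the screen normal
        return distance, angle

    # e.g. a user about 2 m away, 0.5 m right of the screen centre line
    d, a = user_position((0.5, 1.6, 2.0), (0.5, 1.0, 2.0))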
The detection and identification of user gesture motions includes recognition of the user's static hand behavior and recognition of the user's dynamic hand behavior, specifically as follows:
hand detection and segmentation are realized based on Kinect: the hand centroid coordinates are obtained through OpenNI SDK, the hand region is extracted from a three-dimensional neighborhood of the hand coordinates, the extracted region is then segmented with a skin color model to obtain a preliminary hand image, and the preliminary hand image is denoised, dilated and eroded to obtain the final hand image;
the HCDF-H algorithm is used to recognize the user's static hand behavior;
the user's dynamic hand behavior is recognized.
The recognition of the user's static hand behavior with the HCDF-H algorithm is as follows: the gesture image is first normalized to 32*32 pixels, the vector from the gesture center of gravity to the farthest gesture point is computed as the main direction, the gesture image is divided into 8 subregions along the main direction, the pixel count of each subregion is obtained to generate the gesture coordinate-point distribution feature vector, and a class-Hausdorff distance is then used to compare the gesture with every gesture in the template library to produce the final recognition result.
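The Python sketch below illustrates one reading of this pipeline (distribution feature extraction plus a class-Hausdorff-style comparison); the angular binning of the subregions, the combination of the two distance terms and all names are assumptions for illustration, not the cited algorithm's exact definition:

    import numpy as np

    def hcdf_feature(img):
        """Coordinate-distribution feature (sketch). `img` is a 32x32 binary
        hand mask; the main direction runs from the hand centroid to the
        farthest hand pixel, and pixel counts in 8 angular subregions
        relative to that direction form the feature vector."""
        ys, xs = np.nonzero(img)
        pts = np.stack([xs, ys], axis=1).astype(float)
        centroid = pts.mean(axis=0)
        far = pts[np.argmax(np.linalg.norm(pts - centroid, axis=1))]
        main = np.arctan2(far[1] - centroid[1], far[0] - centroid[0])
        ang = (np.arctan2(pts[:, 1] - centroid[1], pts[:, 0] - centroid[0])
               - main) % (2 * np.pi)
        hist, _ = np.histogram(ang, bins=8, range=(0.0, 2 * np.pi))
        return pts - centroid, hist

    def hausdorff_like(a, b):
        """Symmetric, averaged Hausdorff-style point-set distance (simplified)."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return max(d.min(axis=1).mean(), d.min(axis=0).mean())

    def classify(img, templates):
        """Pick the template gesture with the smallest combined distance."""
        pts, hist = hcdf_feature(img)
        best = min(templates, key=lambda t: hausdorff_like(pts, t["pts"])
                   + np.abs(hist - t["hist"]).sum())
        return best["label"]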
The recognition of the user's dynamic hand behavior includes:
Step 1. Input the gesture image frames and the three-dimensional hand centroid coordinates, and initialize the dynamic gesture type feature vector DGT;
Step 2. According to the gesture centroid coordinates, compute the static gesture motion distance d once over every T consecutive image frames, updating d with each new group of T frames;
Step 3. If d < D, begin recognizing the static gesture Gesture_start that triggers the dynamic gesture, where D is a threshold;
Step 4. If Gesture_start is recognized successfully, record the current static gesture centroid coordinate S and go to Step 5;
Step 5. Extract the dynamic gesture centroid trajectory and store the three-dimensional coordinates of the trajectory centroid points in the data array;
Step 6. Judge the motion distance d over T consecutive frames again; if d < D, recognize the ending static gesture Gesture_end and compute the data array length length;
Step 7. If Gesture_end is recognized successfully, record the current static gesture centroid coordinate E;
Step 8. If length > 20, judge the motion direction of the dynamic gesture from the coordinates of the centroid point S that triggered the dynamic gesture and the centroid point E that ended it; otherwise judge d again: if d > D, go to Step 9, otherwise return to Step 8;
Step 9. Judge the dynamic gesture type, obtain the corresponding gesture ID, and set the key value of that dynamic gesture ID to 1, indicating that the dynamic gesture was recognized successfully; output the dynamic gesture type ID and the key value corresponding to the ID;
Step 10. Restore DGT to its initial state.
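A compact Python sketch of Steps 1-10 as a state machine, assuming `frames` yields one (static_gesture_id, centroid) pair per image frame and that T, D, start_id and end_id are supplied by the caller; the structure and helper names are illustrative assumptions:

    import numpy as np

    def recognize_dynamic_gesture(frames, T, D, start_id, end_id, min_len=20):
        window, traj = [], []
        tracking, S = False, None
        for gesture_id, centroid in frames:
            c = np.asarray(centroid, dtype=float)
            if tracking:
                traj.append(c)                     # Step 5: record the trajectory
            window.append(c)
            if len(window) < T:
                continue
            d = np.linalg.norm(window[-1] - window[0])  # Step 2: motion over T frames
            window = []
            if not tracking and d < D and gesture_id == start_id:
                S, traj, tracking = c, [], True    # Steps 3-4: trigger gesture held still
            elif tracking and d < D and gesture_id == end_id:
                E = c                              # Steps 6-7: end gesture held still
                if len(traj) > min_len:            # Step 8: trajectory long enough
                    return S, E, traj              # Step 9: direction judged from S and E
                tracking = False                   # Step 10: reset and keep scanning
        return None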
The establishment of the multi-level dynamic context inference model based on user behavior and smart TV state, and the acquisition of the high-level implicit interaction information, are realized as follows:
the interaction concept nodes are divided into four classes: user behavior interaction concept nodes, device-environment context state information interaction concept nodes, interaction scenario event nodes, and interaction concept nodes exciting operational semantics;
the interaction concept node set C denotes the node set of the multi-level dynamic context inference model, C = (U, S, E, A), where U is the set of user behavior interaction concept nodes, S is the set of device-environment context state information interaction concept nodes, E is the set of interaction scenario event nodes, and A is the set of interaction concept nodes exciting operational semantics;
the sets U and S are known state parameters, while E and A are unknown parameters. In the initial state, the concept value of each node in U and S is determined from the initial state values detected at the current moment: if an event is detected, the corresponding interaction concept node value is set to 1, otherwise to 0; each concept node value in E and A is initialized to 0. When the multi-level dynamic context inference model converges to a steady state, the value of each interaction concept node in the steady state is obtained. The context reasoning of the multi-level dynamic context inference model is computed as follows:
A_i^{t+1} = f\left( \sum_{j=1, j \neq i}^{n} W_{ij} A_j^{t} \right)    (5)

f(x) = 1 / (1 + e^{-x/2})    (6)

where A_i^{t+1} is the state value of interaction concept C_i at time t+1, A_j^{t} is the value of interaction concept C_j at time t, and W_{ij} is the weight between C_i and C_j, representing the strength of the causal connection between the related nodes. From the edge weights between interaction nodes the adjacency matrix W of the CDL-DFCM is obtained, W = \{W_{11}, W_{12}, \ldots, W_{nn}\}. f denotes a threshold function whose role is to map the values of the interaction concepts into the interval [0, 1]. W is applied iteratively to this vector until C reaches a stable convergence state, i.e.

w_{ij}^{t+1} = w_{ij}^{t} + \lambda\left( \Delta q_i^{t+1} \Delta q_j^{t+1} \right)    (7)

where w_{ij}^{t+1} denotes the weight W_{ij} at iteration t+1 and λ denotes the learning rate factor, λ = 0.1,

\Delta q_x^{t+1} = A_x^{t+1} - A_x^{t}    (8)

where \Delta q_x^{t+1} denotes the change in value of interaction concept node C_x at iteration t+1 and A_x^{t} denotes the value of node C_x at iteration t.

The interaction concept set C is mapped onto the intention set I on the perception space, I = (I_1, I_2, \ldots, I_n). Any interaction intention I_x on C has membership function \mu_x(C_i), i = 1, 2, \ldots, n, where C_i denotes the i-th interaction concept node in the interaction concept space C; \mu_x(C_i) takes values in the interval [0, 1], and its value reflects the degree to which C_i belongs to I_x, a value of 0 indicating that C_i does not belong to the interaction intention I_x. I_x is expressed as follows:

I_x = \sum_{i=1}^{n} \mu_x(C_i) / C_i, \quad x = 1, 2, \ldots, n    (9)

In the interaction intention set I on the perception space, the interaction intentions are mutually exclusive in space-time. The user intention description factor FI_x is calculated according to formula (10):

FI_x = \sum_{i=1}^{n} A_i \mu_x(C_i), \quad i = 1, 2, \ldots, n    (10).
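A minimal numerical sketch of formulas (5)-(10) in Python; the convergence test and iteration cap are assumptions, since the patent states only that C reaches a stable convergence state:

    import numpy as np

    def f(x):
        """Threshold function of formula (6), squashing values into [0, 1]."""
        return 1.0 / (1.0 + np.exp(-0.5 * x))

    def infer(W, A0, lam=0.1, tol=1e-4, max_iter=100):
        """Iterate the model of formulas (5)-(8) to a steady state. W is the
        n-by-n causal weight matrix (zero diagonal, so j != i in the sum),
        A0 the initial concept-state vector from the detected context."""
        A = np.asarray(A0, dtype=float)
        for _ in range(max_iter):
            A_next = f(W @ A)                    # formula (5)
            dq = A_next - A                      # formula (8)
            W = W + lam * np.outer(dq, dq)       # formula (7), Hebbian update
            np.fill_diagonal(W, 0.0)
            if np.max(np.abs(A_next - A)) < tol: # converged to steady state
                return A_next, W
            A = A_next
        return A, W

    def intention_factor(A, mu):
        """Formula (10): FI_x = sum_i A_i * mu_x(C_i); mu has one row per
        intention and one column per concept node."""
        return np.asarray(mu) @ A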
The establishment of the implicit interaction behavior model with fusion of explicit and implicit information, and the completion of the interaction task, include:
S1. Detect the smart TV function state context and the user's explicit behavior information in real time;
S2. Obtain the dynamic context data, perform data fusion and feature extraction according to the multi-level dynamic context model, and detect the states of the low-level context events;
S3. Detect and identify the type of the dynamic gesture at time T; according to the dynamic gesture type recognition algorithm, obtain the user's dynamic gesture type ID and key value at time T;
S4. Initialize the interaction concept set C: according to the states of the low-level context events, set the initial value of each interaction concept node in U and S of C, the node value corresponding to a detected state event being set to 1 and otherwise to 0; the initial value of each node in the sets E and A is set to 0;
S5. Obtain the interaction concept node values of C in the convergence state from the adjacency matrix W and formula (5);
S6. Calculate, according to formulas (9) and (10), the intention description factor FI_x of each interaction intention I_x (x = 1, 2, ..., n) in the intention set, and compare it with the corresponding intention description factor in the set FI; if FI_x = FI_convergence, activate the interaction scenario event and interactive operation corresponding to intention I_x, otherwise return to S1;
S7. The function menu corresponding to the interaction scenario event activated at time T is displayed at the top of the smart TV interface, and the computer performs the interactive operation corresponding to the user's interaction intention;
S8. Detect the user behavior at time T+1; if a user gesture motion is detected, obtain the user's dynamic gesture type ID and key value at time T+1 according to the DGRA algorithm, then execute S9; otherwise the smart TV keeps its current function state and S8 is executed cyclically;
S9. Compute the vector DGDM at time T+1 and compute the interactive task feature vector TI; if TI = TI_x, x = 1, 2, ..., 6, the computer completes the corresponding function operation according to interactive task TI_x.
The vector DGDM at time T+1 in S9 is calculated using formula (12):
DGDM = (ID, posture, key)    (12)
where ID is the unique identifier of the dynamic gesture, posture represents the semantics the dynamic gesture expresses, and key represents the recognition flag of the dynamic gesture.
The interactive task feature vector TI in S9 is calculated as follows:
at time T+1, the interactive action carrying a specific semantics is combined with the system interface interaction information at that moment, and the user's specific interactive task is realized with the interaction mapping paradigm of explicit-implicit information fusion. Under a specific interaction scenario the interactive tasks TI constitute the interactive task set TIS = (TI_1, TI_2, \ldots, TI_n), and the interactive task feature vector TI is given by formula (11):
TI_i = (DGDM, E, A),  i = 1, 2, \ldots, n    (11)
where the first feature vector DGDM represents the dynamic gesture behavior information, the second vector E represents the recognized interaction scenario event, and the third vector A represents the perceived operation intention of the user.
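As a hedged illustration of how formula (11) might drive task dispatch, the Python sketch below matches a TI triple against registered handlers; the equality test, field types and string keys are assumptions, since the patent states only that TI = TI_x selects interactive task TI_x:

    from dataclasses import dataclass
    from typing import Callable, Dict, Optional, Tuple

    @dataclass(frozen=True)
    class TaskVector:
        """TI = (DGDM, E, A) of formula (11)."""
        dgdm: Tuple[int, str, int]  # (ID, posture, key) per formula (12)
        event: str                  # activated scenario event, e.g. "E2"
        intent: str                 # perceived operation intention, e.g. "A4"

    def dispatch(ti: TaskVector,
                 handlers: Dict[tuple, Callable[[], None]]) -> Optional[str]:
        key = (ti.dgdm[0], ti.event, ti.intent)  # gesture ID + event + intention
        handler = handlers.get(key)
        if handler:
            handler()        # the computer performs the matched function operation
            return "done"
        return None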
Compared with the prior art, the beneficial effects of the invention are:
(1) according to the behavior characteristics of users, the method establishes a vision-based single-hand gesture interaction prototype system for smart TVs;
(2) a multi-level context model and the CDL-DFCM inference model are proposed, realizing the recognition of interaction scenario events and the perception of user intention;
(3) an implicit interaction behavior model with fusion of explicit and implicit information and related algorithms are proposed, effectively improving the interactive experience of smart TV users and reducing the user's operational and cognitive loads.
Description of the drawings
Fig. 1 Gesture action statistics table
Fig. 2 Static gesture images of different types
Fig. 3 Decomposition diagram of the dynamic gesture model
Fig. 4 Gesture motion directions
Fig. 5 Context model for smart TV gesture interaction
Fig. 6 CDL-DFCM model of the dynamic context for smart TV gesture interaction
Fig. 7 Initialized weight matrix W_initial
Fig. 8 Implicit interaction behavior model with explicit-implicit information fusion
Fig. 9 Comparison of operation accuracy
Fig. 10 Gesture displacement corresponding to each function operation
Fig. 11 Dynamic gesture type recognition rate
Fig. 12 Average operation time.
Detailed description of the invention
The present invention is described in further detail below with reference to the accompanying drawings:
From the perspective of cognitive psychology, the present invention captures the user's interaction intention and, in combination with implicit interaction theory, proposes a multi-level dynamic context inference model based on DFCM and an implicit interaction behavior model with fusion of explicit and implicit information. First, the user's posture behavior information is obtained in real time, the user position is detected, and the user's gesture motions are detected and identified; at the same time the smart TV function state is detected and the low-level explicit interaction information is obtained. Second, the processed user posture behavior information is combined with the real-time function state information of the smart TV to establish a dynamic context model; the multi-level dynamic context inference model, built on the data-driven differential Hebbian dynamic fuzzy cognitive map DFCM with iterative weight learning (see: Zhang Yanli. Modeling and control of dynamical systems based on fuzzy cognitive maps [D]. Dalian University of Technology, 2012), obtains the high-level implicit interaction information. Finally, the implicit interaction information is visualized, the gesture motions the user completes under the guidance of the visualized implicit information are identified, and the implicit interaction behavior model with explicit-implicit information fusion is used to complete the interaction task.
In smart TV human-computer interaction, gesture motion is an imprecise form of input, and achieving the user's interaction goal depends entirely on the recognition rate of the gesture motions; this increases the user's operational and cognitive load. In this situation, dynamic context plays an important role in understanding user gesture motions. Through an analysis of vision-based smart TV gesture interaction scenarios, the present invention first establishes the multi-level context model based on user behavior and smart TV state, realizing context data fusion and feature extraction; second, it designs and implements the dynamic context CDL-DFCM inference model and the implicit interaction model with explicit-implicit information fusion, recognizing interaction scenario events and perceiving user intention; finally, it proposes an implicit interaction algorithm fusing explicit and implicit context information. Experimental results show that, compared with existing related algorithms, the present invention is clearly improved in operation accuracy, time overhead and gesture displacement, and effectively improves the user experience.
In a smart TV interaction system, the user completes the corresponding interactive operations according to the operation task; the user's interaction needs are therefore the basis for building a vision-based smart TV gesture interaction system prototype. The present invention first performed a statistical analysis of users' habitual daily actions in vision-based remote gesture interaction, then analyzed the cognitive information in them to establish a user behavior model and a prototype system, and designed the following experiments.
Experiment 1
First, a user watching TV was simulated in a laboratory equipped with a smart TV, and a Kinect-based remote single-hand gesture interaction model for the smart TV was set up; this model could not yet perform real interactive operations with the user, and its operating range was 1-3.5 m. Second, 50 student volunteers of different majors were invited to take part in the experiment; every participant had experience operating a smart TV or smartphone. For each participant we recorded the gesture actions made according to the TV function layout and natural reaction that were the most natural and effortless, performed one-handed. Finally, the users' habitual actions were counted, cognitive behavior analysis was carried out, and a behavior model was established from the most habitual actions for each kind of TV function operation. Experiment 1 provided the 10 most popular classes of gesture motion in vision-based gesture interaction (see: Liu Xuejun. Research and implementation of a gesture interaction system for interactive TV [D]. Fudan University, 2013) and the smart TV function interface for the participants' reference. The statistical results show that, without considering the user's operation purpose, 4 kinds of gesture motions were chosen by more than 50% of participants, as shown in Fig. 1.
Experiment 2
On the basis of Experiment 1, Experiment 2 was designed. First, an online survey questionnaire about vision-based smart TV gesture interaction operation was designed. Second, according to the statistical results of the questionnaire, a vision-based smart TV gesture interaction prototype system was developed. A total of 157 questionnaires were returned; respondents aged 15-25 accounted for 75.16% of the total and those aged 25-60 for 24.85%. The gender ratio was roughly equal and does not affect the experiment. Among the respondents, 81.53% had never used a vision-based gesture interaction smart TV. Regarding the purpose of gesture interaction with a smart TV, 52.87% considered the main uses to be channel, volume and TV switch-off operations, while 45.86% would use gesture interaction only for games. 56.45% were dissatisfied with adjusting volume and channel by remote control.
Based on Experiments 1 and 2, the present invention designed the vision-based single-hand smart TV gesture interaction prototype system IHCI-smartTV. IHCI-smartTV includes five functional modules: channel adjustment, volume adjustment, homepage function switching, the gesture operation switch, and a gesture-controlled game; the 8 gesture motions designed in Table 1 complete the interactive tasks with the smart TV. The present invention mainly studies the gesture interaction of the channel adjustment, volume adjustment and gesture operation switching functions in IHCI-smartTV. The gesture operation switching function means that, once the gesture operation switch is turned on, gesture motions can control every smart TV operation other than the gesture operation switch itself; its purpose is to avoid the Midas Touch problem present in vision-based gesture interaction.
Dynamic gesture behavior model
ID   Semantics             Interactive task
1    Push hand forward     Activate the gesture operation function
2    Wave left and right   Pop up the volume menu
3    Wave to the left      Decrease volume / switch to the current channel
4    Wave to the right     Increase volume / switch to the current channel
5    Wave up and down      Pop up the channel menu
6    Wave upward           Previous channel / set to the current volume
7    Wave downward         Next channel / set to the current volume
8    Clench fist           Close the gesture operation function / confirm
Table 1
Implicit interaction behavior model:
Detection and identification of explicit human behavior context information:
The user's explicit behavior information refers to the body behavior information of the single user interacting with the smart TV, including detection of the user position and detection and identification of static and dynamic hand behavior. User position detection refers to the horizontal distance and angle of the user relative to the camera on the smart TV. Vision-based gesture detection and recognition falls into two categories: dynamic gestures (gesture) composed of continuous hand motion, such as waving; and static hand postures (posture). In the present invention the gesture motion context refers to the motion and geometric information of the hand action, such as the static posture of the hand, its movement speed and its motion trajectory.
Research on human action behavior requires accurate and timely collection of human dynamic and static data. For this the present invention builds a Kinect-based experimental platform configured with OpenNI SDK. The three-dimensional coordinates of the 15 main human joints can be obtained through Kinect, and the position of the body relative to the smart TV can be determined from the head node and the body center-of-gravity coordinate information. Hand detection and segmentation based on Kinect is performed by obtaining the hand centroid coordinates through OpenNI SDK, extracting the hand region from a three-dimensional neighborhood of the hand coordinates, then segmenting the extracted region with a skin color model to obtain a preliminary hand image, and finally denoising, dilating and eroding the preliminary hand image to obtain a satisfactory final hand image.
Considering that dynamic and static gestures are used together in practical applications, and the Midas Touch problem present in vision-based gesture interaction, the present invention combines static gesture recognition with dynamic gesture detection and establishes a dynamic gesture type recognition model based on static gesture recognition and gesture motion detection (dynamic gesture detect model, DGDM). The model is formally described as DGDM = <ID, posture, Gesture_start, Gesture_end, orientation, key, data, length>. ID is the unique identifier of the dynamic gesture; posture is the explicit semantic label of the gesture action, e.g. "fist", "wave"; Gesture_start is the predefined static gesture that triggers the dynamic gesture; Gesture_end is the predefined static gesture that ends the dynamic gesture; orientation describes the relative motion direction of the gesture in three-dimensional space; key is a flag bit that is set to 1 when the dynamic gesture is detected and 0 otherwise; data is a floating-point array storing the normalized gesture centroid motion trajectory coordinates; length is the number of image frames from start to end of the dynamic gesture and describes its duration. Under conscious operation, the duration of a user's dynamic gesture shows a certain regularity, which can be obtained through statistical experiments.
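The formal description above maps naturally onto a record type; a Python sketch follows, with field types assumed, since the patent gives only the field semantics:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class DGDM:
        """<ID, posture, Gesture_start, Gesture_end, orientation, key,
        data, length> as described above."""
        ID: int                    # unique identifier of the dynamic gesture
        posture: str               # explicit semantics, e.g. "fist", "wave"
        Gesture_start: int         # predefined static gesture that triggers it
        Gesture_end: int           # predefined static gesture that ends it
        orientation: str = ""      # relative motion direction in 3D space
        key: int = 0               # recognition flag: 1 once detected
        data: List[Tuple[float, float, float]] = field(default_factory=list)
                                   # normalized centroid trajectory coordinates
        length: int = 0            # frame count, i.e. gesture duration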
Static gesture postures are recognized with the HCDF-H algorithm (see: Yang Xuewen, Feng Zhiquan, Huang Zhongzhu, He Nana. Gesture recognition combining the gesture main direction and a class-Hausdorff distance [J]. Journal of Computer-Aided Design & Computer Graphics, 2016, 01: 75-81). The gesture image is first normalized to 32*32 pixels, the vector from the gesture center of gravity to the farthest gesture point is computed as the main direction, the gesture image is divided into 8 subregions along the main direction, the pixel count of each subregion is obtained to generate the gesture coordinate-point distribution feature vector, and a class-Hausdorff distance is then used to compare the gesture with every gesture in the template library to produce the final recognition result. The method avoids the effects of gesture rotation, translation and scaling and has high efficiency and recognition accuracy. In vision-based smart TV gesture interaction, the effective static gestures in the TV interaction system are divided into three types: five fingers open is type 1, clenched fist is type 2, and extended index and middle fingers is type 3, as shown in Fig. 2. The decomposition of a dynamic gesture based on static gestures is shown in Fig. 3.
In the smart TV gesture interaction experiments it was found that before each dynamic gesture the user consciously adjusts the static gesture. During the static-gesture adjustment period (the time it takes the user to move from a random static gesture to the intended static gesture with concrete semantics), the displacement of the static gesture centroid remains nearly stationary. The experiment analyzed the dynamic gesture motions of 50 users and counted the displacement of the gesture centroid between every two frames during the static-gesture adjustment period for different types of dynamic gestures. Every T consecutive frames of gesture images are taken as one static adjustment period, and the static gesture motion distance within the T frames must satisfy the condition threshold D. D and T are used as the state transition condition: if the gesture motion distance d < D within T consecutive frames, the static gesture recognition stage is entered. The motion direction (orientation) is the key information distinguishing different dynamic gestures. If a coordinate system is set up with the centroid point S of the static gesture triggering the dynamic gesture as the origin, its direction relation to the centroid point E of the static gesture ending the dynamic gesture is judged as shown in Fig. 4.
The orientation can be described by Ori in formula (1). First, the tangent of the angle between the vector SE and the X axis is computed in the XOY plane from S and E; the absolute value of the tangent determines whether the gesture moves in the up-down direction or the left-right direction. For the up-down direction the concrete direction is judged from the sign of the Y-axis coordinate difference of the two points, and for the left-right direction from the sign of the X-axis coordinate difference. For the Z direction, the absolute value of the horizontal gesture displacement threshold is Z_0. The computation is:

f_o = \begin{cases} |\tan\theta| \times (E.x - S.x), & 0 < |\tan\theta| < 1 \\ |\tan\theta| \times (E.y - S.y), & |\tan\theta| > 1 \end{cases}    (2)

|\tan\theta| = \left| \frac{E.y - S.y}{E.x - S.x} \right|    (3)
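A Python sketch of the direction judgment of formulas (1)-(3); the vertical-move tie-break and the Z_0 value are illustrative assumptions:

    def judge_orientation(S, E, Z0=0.15):
        """Classify motion direction from the trigger centroid S and the
        end centroid E, both (x, y, z) tuples."""
        dx, dy, dz = E[0] - S[0], E[1] - S[1], E[2] - S[2]
        if abs(dz) > Z0:                       # Z axis: push toward the screen
            return "push"
        if dx == 0:                            # vertical move, tangent undefined
            return "up" if dy > 0 else "down"
        tan = abs(dy / dx)                     # formula (3)
        if tan > 1:                            # steeper than 45 degrees: vertical
            return "up" if dy > 0 else "down"  # sign of the Y difference
        return "right" if dx > 0 else "left"   # shallower: sign of the X difference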
According to the DGDM we can determine the dynamic gesture type (dynamic gesture type, DGT) and describe a dynamic gesture with the feature vector DGT; different dynamic gestures can be described by their semantics, starting gesture, ending gesture, direction and duration.
DGT=(ID, posture, Gesture_start, Gesture_end, orientation, length) (4)
Based on the above information, the steps of the dynamic gesture type recognition algorithm (Dynamic gesture recognition algorithm, DGRA) are as follows:
Input: gesture image frames, three-dimensional hand centroid coordinates.
Output: dynamic gesture type ID and the key value corresponding to the ID.
Step 1. Initialize DGT;
Step 2. According to the gesture centroid coordinates, compute the static gesture motion distance d once over every T consecutive image frames, updating d with each new group of T frames.
Step 3. If d < D, begin recognizing the static gesture Gesture_start that triggers the dynamic gesture.
Step 4. If Gesture_start is recognized successfully, record the current static gesture centroid coordinate S and go to Step 5.
Step 5. Extract the dynamic gesture centroid trajectory and store the three-dimensional coordinates of the trajectory centroid points in the data array.
Step 6. Judge the motion distance d over T consecutive frames again; if d < D, recognize the ending static gesture Gesture_end and compute the data array length length.
Step 7. If Gesture_end is recognized successfully, record the current static gesture centroid coordinate E.
Step 8. If length > 20, substitute the coordinates of S and E into formula (1) to judge the motion direction of the dynamic gesture. Otherwise judge d again: if d > D, go to Step 9, otherwise return to Step 8.
Step 9. Judge the dynamic gesture type according to formula (4), obtain the corresponding gesture ID, and set the key value of that dynamic gesture ID to 1, indicating that the dynamic gesture was recognized successfully.
Step 10. Restore DGT to its initial state.
Perception and reasoning of high-level implicit information based on the CDL-DFCM model:
In a human-computer interaction system, the implicit information of the user's interactive behavior is often hidden in the context of the interaction scenario. The smart TV interaction system considers three main forms of context information: the smart TV state context, the context associating the person with the smart TV, and the context related to user behavior.
(1) The context related to the smart TV state can be divided by hierarchical relation into low-level device function states, e.g. "TV program playing state", "homepage switching function state", "standby state", and high-level interaction scenario events and user intentions obtained by reasoning, e.g. "the TV is in the gesture-function activated state", "the TV is in the channel adjustment state", "the TV is in the volume adjustment state". This kind of information bears on the understanding of the human body and is important evidence for resolving the polysemy of user behavior.
(2) The context related to the user includes the relative position of the body center of gravity and the hand motion behavior information.
(3) The context associating the user with the smart TV is defined as the user position event and is linked with the switch state of the smart TV, e.g. "the user is within the effective operating range of the TV" in the TV operating state. This kind of information is the tie connecting the user behavior context with the device state context.
A multi-level context model is established for the vision-based smart TV gesture interaction scenario context, as shown in Fig. 5.
In implicit interaction theory, context bridges the semantic gap between low-level system data and the understanding of high-level user intention. In order to recognize interaction scenario events and actively understand the user's actions, the present invention analyzes user behavior and smart TV state and, based on the context model, proposes a multi-level dynamic context inference model (CDL-DFCM) based on DFCM. CDL-DFCM can realize the perception of the user's operation intention and processes context data in real time in an online detection mode. In the CDL-DFCM model, interaction concept nodes are divided into four classes: smart TV state interaction concept nodes, describing the context related to the smart TV function state; user behavior interaction concept nodes, describing the user's gesture interaction actions; interaction scenario concept nodes, describing the interaction scenario events of concrete interactive tasks; and concept nodes of operational semantics, describing the user's operation intention and associated with the interaction scenario events.
For the basic operation requirements of the vision-based smart TV gesture interaction system, the present invention analyzes the gesture interaction of the channel adjustment, volume adjustment and gesture operation switching functions in the IHCI-smartTV prototype system, specifically the volume increase and decrease operations, the previous/next channel adjustment operations, and the gesture operation switching function. The purpose of the gesture operation switching function is to blend smoothly with the other interaction channels and prevent mutual interference. The interaction concept node set C denotes the node set of the CDL-DFCM, C = (U, S, E, A), where U is the set of user behavior interaction concept nodes, S is the set of device-environment context state information interaction concept nodes, E is the set of interaction scenario event nodes, and A is the set of interaction concept nodes exciting operational semantics.
In the IHCI-smartTV human-computer interaction system studied by the present invention, the concept nodes are listed as follows:
(1) Interaction concept node list:
{
// user action behavior interaction concept node set U
1. Push hand forward (wave forward, U1);
2. Wave upward (wave up, U2);
3. Wave downward (wave down, U3);
4. Wave to the left (wave to the left, U4);
5. Wave to the right (wave to the right, U5);
6. Clench fist (fist, U6);
7. User position (U7)
// smart TV state information interaction concept node set S
1. Smart TV program playing state (the playing state of smart TV, S1);
2. Gesture operation function state (the opening state of the body gesture operating function, S2);
// interaction scenario event nodes E
1. Channel function operation interaction (E1);
2. Volume function operation interaction (E2);
3. Gesture control operation interaction (E3);
// interaction concept node set A exciting operational semantics
1. Pop up the channel operation menu interface and keep switching to the previous channel (A1);
2. Pop up the channel operation menu interface and keep switching to the next channel (A2);
3. Pop up the volume operation menu interface and keep decreasing the volume by a fixed amplitude from the original volume value until a stop-decreasing command is received or the mute state is reached (A3);
4. Pop up the volume operation menu interface and keep increasing the volume by a fixed amplitude from the original volume value until a stop-increasing command is received or the maximum volume state is reached (A4);
5. Open the gesture operation function (A5);
6. Close the gesture operation function (A6);
}
(2) Interaction concept node association relation list:
{
S1 → U1: in the TV program playing state, the probability that the user performs action U1 increases
S1 → U2: in the TV program playing state, the probability that the user performs action U2 increases
S1 → U3: in the TV program playing state, the probability that the user performs action U3 increases
S1 → U4: in the TV program playing state, the probability that the user performs action U4 increases
S1 → U5: in the TV program playing state, the probability that the user performs action U5 increases
S1 → U6: in the TV program playing state, the probability that the user performs action U6 increases
S2 → U1: in the gesture-operation-function open state, the probability that the user performs action U1 increases
S2 → U2: in the gesture-operation-function open state, the probability that the user performs action U2 increases
S2 → U3: in the gesture-operation-function open state, the probability that the user performs action U3 increases
S2 → U4: in the gesture-operation-function open state, the probability that the user performs action U4 increases
S2 → U5: in the gesture-operation-function open state, the probability that the user performs action U5 increases
S2 → U6: in the gesture-operation-function open state, the probability that the user performs action U6 increases
U1 → E3: pushing the hand forward horizontally increases the probability of popping up the gesture interaction switch menu
U6 → E3: the fist action increases the probability of popping up the gesture interaction switch menu
U2 → E1: waving upward increases the probability of popping up the channel menu
U3 → E1: waving downward increases the probability of popping up the channel menu
U4 → E2: waving to the left increases the probability of popping up the volume menu
U5 → E2: waving to the right increases the probability of popping up the volume menu
U7 → U1: after the user enters the effective gesture operating area, the probability of performing U1 increases
U7 → U2: after the user enters the effective gesture operating area, the probability of performing U2 increases
U7 → U3: after the user enters the effective gesture operating area, the probability of performing U3 increases
U7 → U4: after the user enters the effective gesture operating area, the probability of performing U4 increases
U7 → U5: after the user enters the effective gesture operating area, the probability of performing U5 increases
U7 → U6: after the user enters the effective gesture operating area, the probability of performing U6 increases
E1 → A1: after the channel operation function is activated, the probability of continuously switching to the previous channel increases
E1 → A2: after the channel operation function is activated, the probability of continuously switching to the next channel increases
E2 → A3: after the volume operation function is activated, the probability of continuously decreasing the volume increases
E2 → A4: after the volume operation function is activated, the probability of continuously increasing the volume increases
E3 → A5: after the gesture operation switch menu pops up, the probability of opening the gesture operation function increases
E3 → A6: after the gesture operation switch menu pops up, the probability of closing the gesture operation function increases
A5 → S2: opening the gesture operation function changes the gesture operation switch state
}
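The association list above fixes which entries of the adjacency matrix W are non-zero. The Python sketch below assembles W from that edge list, with a uniform placeholder weight standing in for the expert-assigned values of W_initial in Fig. 7, which the patent does not reproduce in the text:

    import numpy as np

    NODES = ["U1", "U2", "U3", "U4", "U5", "U6", "U7",
             "S1", "S2", "E1", "E2", "E3",
             "A1", "A2", "A3", "A4", "A5", "A6"]
    EDGES = ([("S1", u) for u in ["U1", "U2", "U3", "U4", "U5", "U6"]]
             + [("S2", u) for u in ["U1", "U2", "U3", "U4", "U5", "U6"]]
             + [("U7", u) for u in ["U1", "U2", "U3", "U4", "U5", "U6"]]
             + [("U1", "E3"), ("U6", "E3"), ("U2", "E1"), ("U3", "E1"),
                ("U4", "E2"), ("U5", "E2"),
                ("E1", "A1"), ("E1", "A2"), ("E2", "A3"), ("E2", "A4"),
                ("E3", "A5"), ("E3", "A6"), ("A5", "S2")])

    def build_w(edges=EDGES, default=0.5):
        """W[i][j] holds the causal strength of node j on node i, matching
        the summation convention of formula (5)."""
        idx = {n: i for i, n in enumerate(NODES)}
        W = np.zeros((len(NODES), len(NODES)))
        for src, dst in edges:
            W[idx[dst], idx[src]] = default   # cause src -> effect dst
        return W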
Based on the above analysis, the CDL-DFCM model is established, as shown in Fig. 6.
In the CDL-DFCM model, the sets U and S are known state parameters, while E and A are unknown parameters. In the initial state, the concept value of each node in U and S is determined from the initial state values detected at the current moment: if an event is detected, the corresponding interaction concept node value is set to 1, otherwise to 0; each concept node value in E and A is initialized to 0. When the CDL-DFCM converges to a steady state, the value of each interaction concept node in the steady state can be obtained. The context reasoning based on CDL-DFCM is computed as in formula (5):
A_i^{t+1} = f\left(\sum_{j=1,\, j \neq i}^{n} W_{ij} A_j^{t}\right)    (5)
f(x) = 1 / \left(1 + e^{-\frac{1}{2}x}\right)    (6)
where A_i^{t+1} is the state value of interactive concept C_i at time t+1, and A_j^{t} is the value of interactive concept C_j at time t. The association relations between interactive concept nodes in smart-TV gesture interaction are analysed through causal analysis and expert knowledge. W_ij is the weight between C_i and C_j, representing the strength of the causal connection between the related nodes. From the edge weights between interaction nodes, the adjacency matrix W of the CDL-DFCM is obtained, W = {W_11, W_12, ..., W_nn}; Figure 7 shows the initial adjacency matrix W_initial obtained from causal analysis and expert knowledge. f denotes the threshold function, whose role is to map the value of an interactive concept to the interval [0,1]. W is applied iteratively to this vector until C reaches a stable convergence state, i.e.
w_{ij}^{t+1} = w_{ij}^{t} + \lambda \left(\Delta q_i^{t+1} \Delta q_j^{t+1}\right)    (7)
In formula (7), w_{ij}^{t+1} denotes the weight W_ij at the (t+1)-th iteration, and λ denotes the learning-rate factor, λ = 0.1.
\Delta q_x^{t+1} = A_x^{t+1} - A_x^{t}    (8)
Δq_x^{t+1} denotes the change in the value of interactive concept node C_x at the (t+1)-th iteration, and A_x^{t} denotes the value of node C_x at the t-th iteration.
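The following minimal sketch shows how the inference of formulas (5)-(8) can be iterated to convergence; the convergence tolerance and the iteration cap are assumptions not fixed by the text.

import numpy as np

def threshold(x):
    # Formula (6): map concept values into the interval [0, 1].
    return 1.0 / (1.0 + np.exp(-0.5 * x))

def cdl_dfcm_converge(W, A0, lam=0.1, tol=1e-5, max_iter=200):
    """Iterate formula (5), updating the weights with the Hebbian-style
    rule of formulas (7)-(8), until the concept vector stabilises."""
    A = A0.copy()
    for _ in range(max_iter):
        W_off = W.copy()
        np.fill_diagonal(W_off, 0.0)      # the sum in (5) excludes j = i
        A_next = threshold(W_off @ A)     # formula (5)
        dq = A_next - A                   # formula (8)
        W = W + lam * np.outer(dq, dq)    # formula (7)
        if np.max(np.abs(dq)) < tol:      # steady state reached
            return A_next, W
        A = A_next
    return A, W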
The interactive concept set C is mapped to the interaction intention set I on the perception space, I = (I_1, I_2, ..., I_n). Any interaction intention I_x on C has a membership function μ_x(C_i), i = 1, 2, ..., n, where C_i denotes the i-th interactive concept node in the interactive concept space C. μ_x(C_i) takes values in the interval [0,1]; its value reflects the degree to which C_i belongs to I_x, and a value of 0 indicates that C_i does not belong to the interaction intention I_x. I_x is expressed as follows:
I_x = \sum_{i=1}^{n} \mu_x(C_i) / C_i ,  x = 1, 2, ..., n    (9)
In the interaction intention set I on the perception space, the interaction intentions are mutually exclusive in space and time, i.e. at each moment only the interaction intention with the greatest probability can occur. Based on the membership degrees of formula (9) and the interactive concept node state values in the convergence state, the user-intention description factor FI_x is computed according to formula (10):
FI_x = \sum_{i=1}^{n} A_i \mu_x(C_i) ,  x = 1, 2, ..., n    (10)
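As an illustration of formulas (9) and (10), the sketch below computes each intention description factor FI_x as a weighted sum of the converged node state values; the converged values and membership degrees used here are purely hypothetical.

import numpy as np

def intention_factors(A, mu):
    # Formula (10): FI_x = sum_i A_i * mu_x(C_i), one dot product per intention.
    return mu @ A

A_converged = np.array([0.67, 0.67, 0.63, 0.66, 0.68, 0.60])  # illustrative
mu = np.array([[1.0, 0.0, 0.5, 1.0, 1.0, 1.0],   # memberships for I_1
               [0.0, 1.0, 0.5, 1.0, 1.0, 1.0]])  # memberships for I_2
FI = intention_factors(A_converged, mu)
best = int(np.argmax(FI))   # intentions are mutually exclusive at each moment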
Implicit interaction behaviour model with explicit-implicit information fusion:
In an interactive smart-TV system, the television screen is the user's direct object of attention. In the traditional explicit interaction mode, the user issues operation commands according to the television interface information and a set of fixed interaction rules, so user commands and television operations stand in a rigid step-by-step relation. This makes the user's operation burden heavy, and the average time needed to reach the desired operation result is long. Because the user must remember many operation actions, the cognitive load is also increased. On the basis of the explicit interaction mode, the present invention proposes an implicit interaction behaviour model with explicit-implicit information fusion (EI-IBM) that merges the explicit and implicit interaction modes, as shown in Figure 8. In the explicit-implicit information fusion model built on the IHCI-smartTV prototype system, the user and the smart-TV system are the interaction agents. Implicit interaction is invisible; this invisibility is an indirect connection between the two interacting parties, and the interaction information is uncertain and ambiguous. When the user uses the smart TV transparently, the user's energy is focused more on the interaction task itself. By fusing and analysing multiple kinds of contextual information, the implicit interaction mode eliminates the ambiguity among them, achieves an understanding of the user's intention, and provides interaction services to the user through active feedback.
The implicit interaction model with explicit-implicit information fusion is an innovation for smart-TV interaction; it changes the conventional explicit interaction mode that relies solely on the user's direct commands. The realisation of this mode includes the following process:
(1) Perception and reasoning based on low-level context. According to the user behaviour context at time T, the smart-TV state context and the associated context of the two, the implicit interaction information of the time-T context is obtained through the CDL-DFCM model.
(2) Recognising interaction scenario events, capturing user intention and visualising the implicit interaction information. First, the interaction scenario event at time T is recognised from contextual clues, and the user's interaction intention at time T is perceived; then the smart TV actively provides, in the form of implicit output, the system interaction services relevant to the user's time-T intention. These services include information relevant to the user's intention and active adjustment of the TV's current functional state; the visualisation of the implicit information is realised in forms such as graphics, animation, text and colour, without the user's active intervention. Examples: "actively pop up the volume-adjustment menu", "actively pop up the channel-adjustment menu", "keep increasing the programme volume by a certain step".
(3) Active explicit input guided by the visualised implicit information. Under the guidance of the visualised implicit information, the user actively issues interaction commands to the television system with interaction actions of definite semantics, according to the system service interface information at time T+1.
(4) Realisation of the interaction task. At time T+1, the interaction action with definite semantics is combined with the system interface interaction information at that moment, and the user's specific interaction task is realised through the interaction mapping paradigm of explicit-implicit information fusion. The interaction tasks (task of interaction, TI) under a specific interaction scenario constitute the interaction task set TIS, TIS = (TI_1, TI_2, ..., TI_n). A target interaction task is described by the feature vector TI.
TI_i = (DGDM, E, A), i = 1, 2, ..., n    (11)
In formula (11), the first feature vector DGDM represents the dynamic gesture behaviour information, the second vector E represents the recognised interaction scenario event, and the third vector A represents the perceived operation intention of the user.
DGDM = (ID, posture, key)    (12)
In formula (12), ID is the unique identifier of the dynamic gesture, posture is the semantics the dynamic gesture represents, and key is the recognition flag of the dynamic gesture.
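The two feature vectors of formulas (11) and (12) can be pictured with the following minimal sketch; the field types and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DGDM:
    """Formula (12): dynamic gesture behaviour information."""
    id: int        # unique identifier of the dynamic gesture
    posture: str   # semantics the dynamic gesture represents
    key: int       # recognition flag: 1 = recognised, 0 = not recognised

@dataclass
class InteractionTask:
    """Formula (11): TI_i = (DGDM, E, A)."""
    gesture: DGDM  # dynamic gesture behaviour information
    event: str     # recognised interaction scenario event, e.g. "E1"
    action: str    # perceived operation intention of the user, e.g. "A1"

# Example: "wave up" recognised while the channel menu event is active.
task = InteractionTask(DGDM(id=2, posture="wave_up", key=1),
                       event="E1", action="A1")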
In the present research, there are 6 kinds of user interaction intentions in the IHCI-smartTV system. Using formulas (9) and (10), the user-intention description factor FI_convergence in the CDL-DFCM convergence state can be calculated, as shown in Table 2; the state values of the nodes for each user intention in the CDL-DFCM convergence state are shown in Table 3.
Table 2
I    S1      S2      U7      Ui      Ei      Ai
I1   0.6656  0.6656  0.6305  0.6654  0.6809  0.6024
I2   0.6656  0.6656  0.6305  0.6654  0.6809  0.6024
I3   0.6656  0.6656  0.6305  0.6654  0.6809  0.6024
I4   0.6656  0.6656  0.6305  0.6654  0.6809  0.6024
I5   0.6656  0.6656  0.6305  0.6661  0.6864  0.6024
I6   0.6668  0.6668  0.6307  0.6663  0.6865  0.6024
Table 3 (state values of each user-intention node in the CDL-DFCM convergence state)
Explicit-implicit information fusion implicit interaction algorithm based on smart-TV gesture interaction context:
Starting from the user and the smart TV themselves, the present invention obtains implicit interaction clues by analysing the interaction context with the CDL-DFCM model, and realises intelligent, harmonious and natural interaction between the user and the smart TV through the implicit interaction behaviour model with explicit-implicit information fusion. On this basis, the present invention proposes the explicit-implicit information fusion implicit interaction algorithm (Explicit and Implicit Interaction algorithm, EIIA) based on the dynamic context of smart-TV gesture interaction.
The core idea of the algorithm is as follows. First, user-related behaviour information is obtained according to the user behaviour information model, and the user's explicit behaviour information is recognised from the behaviour feature vector; at the same time, the smart-TV functional state is detected, completing the extraction of low-level context information. Then, the low-level dynamic context is processed by the CDL-DFCM model to obtain the high-level implicit interaction information, recognise the interaction scenario event, perceive the user's operation intention, and visualise the implicit interaction information. Finally, guided by the visualised implicit information, the user makes natural explicit interaction actions and completes the specific interaction task. The explicit-implicit information fusion implicit interaction algorithm is described as follows:
Step1. Detect in real time the smart-TV functional-state context and the user's explicit behaviour information.
Step2. Obtain the dynamic context data, perform data fusion and feature extraction according to the multi-level dynamic context model, and detect the state of the low-level context events.
Step3. Detect and recognise the type of the dynamic gesture at time T; according to the dynamic gesture type recognition algorithm (DGRA), obtain the dynamic gesture type ID and key value of the user at time T.
Step4. Initialise the interactive concept set C. According to the states of the low-level context events, set the initial values of the interactive concept nodes in U and S of C: the interactive concept node value corresponding to a detected state event is set to 1, otherwise 0; the initial value of every interactive concept node in E and A is set to 0.
Step5. Obtain the interactive concept node values of the interactive concept set C in the convergence state according to the adjacency matrix W and formula (5).
Step6. Compute, according to formulas (9) and (10), the state value of the intention description factor FI_x for each interaction intention I_x (x = 1, 2, ..., n) in the interaction intention set; compare it with the corresponding interaction intention factor in the intention description factor set FI. If FI_x = FI_convergence (see Table 2), activate the interaction scenario event and interactive operation corresponding to intention I_x; otherwise return to Step1.
Step7. Visualisation of the implicit information. The function menu corresponding to the interaction scenario event activated at time T is displayed on the top layer of the smart-TV interface, and the computer performs the interactive operation corresponding to the user's interaction intention.
Step8. Detect the user behaviour at time T+1. If a user gesture action is detected, obtain the user's dynamic gesture type ID and key value at time T+1 according to the DGRA algorithm and perform Step9; otherwise the smart TV keeps its current functional state and Step8 is executed cyclically.
Step9. Compute the vector DGDM at time T+1 according to formula (12), and compute the interaction task feature vector TI in combination with formula (11). If TI = TI_x (x = 1, 2, ..., 6) (see Table 2), the computer completes the corresponding function operation according to interaction task TI_x.
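One possible rendering of the Step1-Step9 control flow is sketched below. The callback parameters stand in for the detection, DGRA recognition, visualisation and execution subsystems described above; cdl_dfcm_converge and DGDM refer to the earlier sketches, and the matching of FI_x against the converged factors of Table 2 is simplified to an argmax with a tolerance, which is an assumption.

import numpy as np

def eiia_loop(W, mu, FI_convergence, tasks,
              detect_context, recognise_gesture, init_concepts,
              visualise, execute, tol=1e-3):
    """Illustrative EIIA main loop; callbacks supply the subsystems."""
    while True:
        events = detect_context()            # Step1-2: low-level context
        recognise_gesture()                  # Step3: DGRA at time T
        A0 = init_concepts(events)           # Step4: U,S from events; E,A = 0
        A, W = cdl_dfcm_converge(W, A0)      # Step5: iterate formula (5)
        FI = mu @ A                          # Step6: formulas (9)-(10)
        x = int(np.argmax(FI))
        if abs(FI[x] - FI_convergence[x]) > tol:
            continue                         # no intention matched: back to Step1
        visualise(x)                         # Step7: menu on top layer + action
        gesture = recognise_gesture()        # Step8: gesture at time T+1
        while gesture is None:               # keep current state, repeat Step8
            gesture = recognise_gesture()
        ti = (gesture.id, gesture.key, x)    # Step9: build the TI feature vector
        if ti in tasks:
            execute(tasks[ti])               # complete the mapped operation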
Experimental results and analysis:
The present invention takes IHCI-smartTV as the experimental platform and designs a new smart-TV interaction mode. Three classes of functions in smart-TV human-computer interaction are selected for testing: channel, volume, and gesture-operation switching; the concrete operations include channel up, channel down, volume up, volume down, gesture operation on and gesture operation off. As a comparison, the dynamic gesture recognition method that does not consider context (HCDF-H) is applied and tested in the IHCI-smartTV prototype system.
The experimental results are as follows:
Ten experimenters were selected; each completed the channel, volume and gesture-operation-switch functions according to the gesture-task mapping model of Table 1. The experiment required each experimenter to stand 2.5 m in front of the smart TV and complete the gesture actions with one hand. Taking volume adjustment as an example of the operation flow: when the user wants to increase the volume, he or she makes the related volume-increase gesture; after the smart TV perceives the user's intention it pops up the volume menu and then keeps increasing the volume by a certain step; when the user is satisfied with the current volume, he or she issues the stop-volume command, and the volume-increase task ends. In each experiment, every experimenter completed: (1) a traversal increase from channel 1 to channel 10, followed by a decrease from channel 10 back to channel 1; (2) a traversal increase and decrease of the volume between 30 and 60; (3) opening and closing the gesture-operation function. "Channel up" here means the channel is adjusted from 1 up to 10. Each experimenter performed 5 experiments. The average accuracy of the function operations is shown in Figure 9. The average moving distance of the gesture for each operation was measured from the mean number of image frames of the experimenters' gesture trajectories during interaction; the gesture movement distance for each kind of TV function operation is shown in Figure 8. Figure 9 shows the average dynamic-gesture recognition rate of the DGRA algorithm. With the smart-TV response time held constant, the average time the two algorithms required to realise the same function operations was counted, the system response time being 2.38 s, as shown in Figure 10.
Experimental analysis is as follows:
Experimental environment: a PC with an Intel(R) Xeon(R) CPU at 2.67 GHz and 8 GB of memory; the visual input device is a Kinect sensor.
Interpretation:
As shown in Figure 9, compared with the HCDF-H algorithm, the proposed EIIA algorithm achieves a higher operation accuracy. As can be seen from Figure 10, in smart-TV gesture interaction based on the EIIA algorithm the user can complete the operation tasks with a smaller gesture movement distance; compared with the HCDF-H algorithm, the distance the user's gesture moves to complete the same interaction task is reduced by about 60%. In the experiments of the present invention, in the EIIA-based channel-up or channel-down operation the user needs only two gesture actions, one to start the channel adjustment and one to end it, to complete the traversal adjustment of the 9 channels in the test, whereas the HCDF-H-based approach needs 9 gesture actions to complete the same channel operation. The same holds for volume adjustment. As shown in Figure 12, EIIA-based smart-TV gesture interaction greatly reduces the user's operation time for regular operations such as channel and volume adjustment, while for the infrequently used gesture-operation open and close functions it has no advantage in time. Figure 11 shows, from the perspective of cognitive psychology, the recognition rates of the user gesture operations established for the smart-TV interaction scenario; every recognition rate exceeds 91%, and these gestures are the user's habitual gesture actions, with comparatively low cognitive and operational load, meeting the interaction demands of the smart TV.
Analysis of the experimental algorithm:
On the basis of the dynamic gesture recognition algorithm DGRA, the EIIA algorithm combines the smart-TV interaction context and proposes a new interaction mode. First, the smart-TV user's habitual gesture behaviour model is established according to cognitive psychology; second, the user behaviour information and smart-TV state information context in the interaction are analysed, and the CDL-DFCM model is used to perceive the user's operation intention; finally, the interaction task is completed through the implicit interaction mode with explicit-implicit information fusion. The EIIA algorithm significantly reduces the user's operation time and gesture movement distance, thereby reducing the user's operational load; the habitual gesture actions also help the user to reduce the cognitive load in smart-TV gesture interaction, thereby improving the user experience.
The above technical scheme is only one embodiment of the present invention. For those skilled in the art, on the basis of the application method and principle disclosed herein, it is easy to make various kinds of improvements or variations, which are by no means limited to the method described in the above detailed description; the foregoing embodiments are therefore to be regarded as preferred rather than restrictive.

Claims (9)

1. An implicit interaction method for a smart television, characterised in that the method comprises: acquiring the user's body behaviour information in real time, detecting the user's position, and detecting and recognising the user's gesture actions; simultaneously detecting the functional-state information of the smart television to obtain low-level explicit interaction information; combining the processed user body behaviour information with the real-time functional-state information of the smart television, and establishing a multi-level dynamic context inference model based on user behaviour and smart-television state to obtain high-level implicit interaction information; visualising the implicit interaction information, recognising the gesture action that the user completes under the guidance of the visualised implicit information, establishing an implicit interaction behaviour model with explicit-implicit information fusion, and completing the interaction task.
2. The implicit interaction method for a smart television according to claim 1, characterised in that the user position refers to the horizontal distance and the angle of the user relative to the camera of the smart television, and detecting the user position is specifically as follows:
the three-dimensional coordinate data of the major joint points of the human body are obtained through Kinect, and the position of the human body relative to the smart television is determined according to the coordinate information of the human head node and the human body centre of gravity.
3. The implicit interaction method for a smart television according to claim 2, characterised in that detecting and recognising the user's gesture actions comprises recognition of the user's static hand behaviour and recognition of the user's dynamic hand behaviour, specifically as follows:
detection and segmentation of the gesture region are realised based on Kinect; the hand centroid coordinates are obtained through the OpenNI SDK, and the hand region is extracted from the three-dimensional space in the neighbourhood of the hand coordinates; the extracted hand region is then processed with a skin-colour-model segmentation method to obtain a preliminary hand image, and the preliminary hand image is denoised, dilated and eroded to obtain the final hand image;
the HCDF-H algorithm is used to recognise the user's static hand behaviour;
the user's dynamic hand behaviour is recognised.
4. The implicit interaction method for a smart television according to claim 3, characterised in that using the HCDF-H algorithm to recognise the user's static hand behaviour is specifically as follows: the gesture image is first normalised to a size of 32*32, and the vector from the gesture centre of gravity to the farthest point of the gesture is computed as the principal direction vector; the gesture image is divided into 8 sub-regions along the principal direction, the pixel count of each sub-region is obtained, and the gesture coordinate-point distribution feature vector is generated; the class-Hausdorff distance is then used to compare it with every gesture in the gesture template library, yielding the final recognition result.
5. The implicit interaction method for a smart television according to claim 4, characterised in that the recognition of the user's dynamic hand behaviour comprises:
Step1. input the gesture image frames and the three-dimensional hand centroid coordinates in space, and initialise the dynamic gesture type feature vector DGT;
Step2. according to the gesture centroid coordinates, compute the static gesture movement distance d over T consecutive image frames, updating d once with every T consecutive frames;
Step3. if d < D, begin to recognise the static gesture Gesture_start that triggers the dynamic gesture, D being a threshold;
Step4. if Gesture_start is recognised successfully, obtain the centroid coordinates S of the current static gesture and proceed to Step5;
Step5. extract the centroid trajectory of the dynamic gesture, and store the three-dimensional coordinates of the trajectory centroid points in a data array;
Step6. judge the movement distance d of T consecutive gesture frames again; if d < D, recognise the ending static gesture Gesture_end and compute the length length of the data array;
Step7. if Gesture_end is recognised successfully, obtain the centroid coordinates E of the current static gesture;
Step8. if length > 20, judge the motion direction of the dynamic gesture according to the coordinate values of the centroid point S of the static gesture triggering the dynamic gesture and the centroid point E of the static gesture ending it; otherwise judge d again: if d > D, perform Step9, otherwise return to Step8;
Step9. judge the dynamic gesture type, obtain the corresponding gesture ID, and set the key value of the corresponding dynamic gesture ID to 1, indicating that dynamic gesture ID is recognised successfully; output the dynamic gesture type ID and the key value corresponding to the ID;
Step10. DGT is reinitialised.
6. The implicit interaction method for a smart television according to claim 5, characterised in that establishing the multi-level dynamic context inference model based on user behaviour and smart-television state and obtaining the high-level implicit interaction information is realised as follows:
the interactive concept nodes are divided into four classes: user behaviour interactive concept nodes, device-environment context state information interactive concept nodes, interaction scenario event nodes, and interactive concept nodes of triggered operation semantics;
the interactive concept node set C denotes the node set of the multi-level dynamic context inference model, C = (U, S, E, A), where U is the user behaviour interactive concept node set, S is the device-environment context state information interactive concept node set, E is the interaction scenario event node set, and A is the interactive concept node set of triggered operation semantics;
the sets U and S are known state parameters, and E and A are unknown parameters; in the initial state, the concept value of each node in U and S is determined according to the initial state values detected at the current moment: if an event is detected, the corresponding interactive concept node value is set to 1, otherwise to 0; each concept node value in E and A is initialised to 0; when the multi-level dynamic context inference model converges to a steady state, the value of each interactive concept node in the steady state is obtained, and the context reasoning based on the multi-level dynamic context inference model is computed as follows:
A_i^{t+1} = f\left(\sum_{j=1,\, j \neq i}^{n} W_{ij} A_j^{t}\right)    (5)
f(x) = 1 / \left(1 + e^{-\frac{1}{2}x}\right)    (6)
where A_i^{t+1} is the state value of interactive concept C_i at time t+1, A_j^{t} is the value of interactive concept C_j at time t, and W_ij is the weight between C_i and C_j, representing the strength of the causal connection between the related nodes; the adjacency matrix W of the CDL-DFCM, W = {W_11, W_12, ..., W_nn}, is obtained according to the edge weights between interaction nodes; f denotes the threshold function, whose role is to map the value of an interactive concept to the interval [0,1]; W is applied iteratively to this vector until C reaches a stable convergence state, i.e.
w_{ij}^{t+1} = w_{ij}^{t} + \lambda \left(\Delta q_i^{t+1} \Delta q_j^{t+1}\right)    (7)
in formula (7), w_{ij}^{t+1} denotes the weight W_ij at the (t+1)-th iteration, and λ denotes the learning-rate factor, λ = 0.1,
\Delta q_x^{t+1} = A_x^{t+1} - A_x^{t}    (8)
Δq_x^{t+1} denotes the change in the value of interactive concept node C_x at the (t+1)-th iteration, and A_x^{t} denotes the iterative value of node C_x at the t-th iteration;
the interactive concept set C is mapped to the interaction intention set I on the perception space, I = (I_1, I_2, ..., I_n); any interaction intention I_x on C has a membership function μ_x(C_i), i = 1, 2, ..., n, where C_i denotes the i-th interactive concept node in the interactive concept space C; μ_x(C_i) takes values in the interval [0,1], the value of μ_x(C_i) reflecting the degree to which C_i belongs to I_x, a value of 0 indicating that C_i does not belong to the interaction intention I_x; I_x is expressed as follows:
I_x = \sum_{i=1}^{n} \mu_x(C_i) / C_i ,  x = 1, 2, ..., n    (9)
in the interaction intention set I on the perception space, the interaction intentions are mutually exclusive in space and time; the user-intention description factor FI_x is computed according to formula (10):
FI_x = \sum_{i=1}^{n} A_i \mu_x(C_i) ,  x = 1, 2, ..., n    (10).
7. The implicit interaction method for a smart television according to claim 6, characterised in that establishing the implicit interaction behaviour model with explicit-implicit information fusion and completing the interaction task comprises:
S1. detect in real time the smart-television functional-state context and the user's explicit behaviour information;
S2. obtain the dynamic context data, perform data fusion and feature extraction according to the multi-level dynamic context model, and detect the state of the low-level context events;
S3. detect and recognise the type of the dynamic gesture at time T, and obtain the dynamic gesture type ID and key value of the user at time T according to the dynamic gesture type recognition algorithm;
S4. initialise the interactive concept set C: according to the states of the low-level context events, set the initial values of the interactive concept nodes in U and S of the interactive concept set C, the interactive concept node value corresponding to a detected state event being set to 1 and otherwise to 0; the initial value of each interactive concept node in the sets E and A is set to 0;
S5. obtain the interactive concept node values of the interactive concept set C in the convergence state according to the adjacency matrix W and formula (5);
S6. compute according to formulas (9) and (10) the state value of the intention description factor FI_x of each interaction intention I_x (x = 1, 2, ..., n) in the interaction intention set; compare it with the corresponding interaction factor in the intention description factor set FI; if FI_x = FI_convergence, activate the interaction scenario event and interactive operation corresponding to interaction intention I_x, otherwise return to S1;
S7. the function menu corresponding to the interaction scenario event activated at time T is displayed on the top layer of the smart-television interface, and the computer performs the interactive operation corresponding to the user's interaction intention;
S8. detect the user behaviour at time T+1; if a user gesture action is detected, obtain the user's dynamic gesture type ID and key value at time T+1 according to the DGRA algorithm, then perform S9; otherwise the smart television keeps the current functional state and S8 is executed cyclically;
S9. compute the vector DGDM at time T+1 and the interaction task feature vector TI; if TI = TI_x, x = 1, 2, ..., 6, the computer completes the corresponding function operation according to interaction task TI_x.
8. The implicit interaction method for a smart television according to claim 7, characterised in that the vector DGDM at time T+1 in S9 is calculated with formula (12):
DGDM = (ID, posture, key)    (12)
in formula (12), ID denotes the unique identifier of the dynamic gesture, posture denotes the semantics the dynamic gesture represents, and key denotes the recognition flag of the dynamic gesture.
9. The implicit interaction method for a smart television according to claim 8, characterised in that the interaction task feature vector TI in S9 is calculated as follows:
at time T+1, the interaction action with definite semantics is combined with the system interface interaction information at that moment, and the user's specific interaction task is realised through the interaction mapping paradigm of explicit-implicit information fusion; the interaction tasks TI under a specific interaction scenario constitute the interaction task set TIS, TIS = (TI_1, TI_2, ..., TI_n), and the interaction task feature vector TI is calculated with formula (11):
TI_i = (DGDM, E, A), i = 1, 2, ..., n    (11)
in formula (11), the first feature vector DGDM denotes the dynamic gesture behaviour information, the second vector E denotes the recognised interaction scenario event, and the third vector A denotes the perceived operation intention of the user.