CN115407867A - Intelligent interaction system based on multiple sensors - Google Patents

Intelligent interaction system based on multiple sensors

Info

Publication number
CN115407867A
CN115407867A (application CN202210861778.6A); granted publication CN115407867B
Authority
CN
China
Prior art keywords
module
user
optimization
sensor
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210861778.6A
Other languages
Chinese (zh)
Other versions
CN115407867B (en)
Inventor
齐红心
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xia Qianming
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202210861778.6A
Publication of CN115407867A
Application granted
Publication of CN115407867B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31: User authentication
    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a multi-sensor intelligent interaction system comprising a sensor combination module, a user model building module, and an interaction optimization module. The sensor combination module collects optimizable user data from multiple sensors; the user model building module builds a usage-scene model of the user from the collected data; and the interaction optimization module visually optimizes the user's interaction scene and interaction mode. The user model building module comprises a manual input module, a sensor data integration module, an identity classification module, and a demand analysis module. The sensor data integration module analyzes and integrates the data collected by the sensors, the identity classification module classifies the user's identity according to the sensor data and the user's habits, and the demand analysis module analyzes the user's interaction demands.

Description

Intelligent interaction system based on multiple sensors
Technical Field
The invention relates to the technical field of intelligent interaction, in particular to an intelligent interaction system based on multiple sensors.
Background
With the rapid development of smart homes, smart televisions occupy an ever larger share of people's lives; in particular, digital-signal smart televisions bound to the major operators are developing rapidly. Although mobile electronic devices are mature, a smart television can give users a sense of immersion. However, the information a smart television presents is overly complex: users find it hard to take in a full screen of information quickly, and the excess of information dampens their enthusiasm to explore, wasting resources while harming the user experience. At the same time, as smart-television functions grow richer, the complexity of operating the system increases accordingly. China's population structure is complex, television manufacturers are numerous, and resources are scattered, so the smart-television experience is far from ideal. Improving the utilization of smart-television resources and information, and improving the user experience, are pain points that smart televisions urgently need to solve. It is therefore necessary to design a multi-sensor intelligent interaction system with reasonable interaction logic to improve the user experience.
Disclosure of Invention
The invention aims to provide an intelligent interactive system based on multiple sensors to solve the problems in the background technology.
In order to solve the above technical problems, the invention provides the following technical scheme: a multi-sensor intelligent interaction system comprises a sensor combination module, a user model building module, and an interaction optimization module. The sensor combination module collects optimizable user data from a plurality of sensors; the user model building module builds a usage-scene model of the user from the collected data; and the interaction optimization module visually optimizes the user's interaction scene and interaction mode. The user model building module comprises a manual input module, a sensor data integration module, an identity classification module, and a demand analysis module. The manual input module enters a single user's identity information and usage habits into the system; the sensor data integration module analyzes and integrates the data collected by the sensors; the identity classification module classifies the user's identity according to the collected sensor data and the user's habits; and the demand analysis module analyzes the user's interaction demands.
According to the above technical scheme, the sensor combination module comprises a depth-sensing camera module, a body-movement recording sensor module, a behavior recording module, and a data uploading module. The depth-sensing camera module verifies the user's identity and records gazing behavior; the body-movement recording sensor module identifies and records the user's body movements; the behavior recording module records the user's behavioral characteristics while watching TV; and the data uploading module transmits the data to a blockchain for storage.
According to the above technical scheme, the interaction optimization module comprises a focus display module, a search gain calculation module, a classification redrawing module, and an information architecture optimization module. The focus display module interfaces with the television display driver board to change the focused display layout; the search gain calculation module calculates the user's gain during the search process; the classification redrawing module redraws the UI's classification menu in a targeted way; and the information architecture optimization module optimizes the module hierarchy of the core functions. The focus display module is electrically connected with the search gain calculation module and the classification redrawing module.
According to the above technical scheme, in the sensor combination module the depth-sensing camera module and the body-movement recording sensor module form a series structure that identifies and records the user's state while watching TV. The depth-sensing camera module has a gaze-sensing function, and the body-movement recording sensor module has a high-sensitivity multi-axis sensing function. The specific linkage method by which this series structure identifies and records the user's state while watching TV is as follows:
Step S1: detect the user's line-of-sight angle and judge the gazing state;
Step S2: send a start signal to the body-movement recording sensor module according to the monitoring result;
Step S3: judge the user's behavior from the data recorded by the body-movement recording sensor.
According to the above technical solution, in step S3 the user behaviors and their judgment criteria fall into the following classifications:
Classification A: the depth-sensing camera detects that the user is gazing;
Classification B: the depth-sensing camera detects that the user is not gazing, and the body-movement recording sensor detects activity feedback within a time less than t;
Classification C: the depth-sensing camera detects that the user is not gazing, and the body-movement recording sensor detects activity feedback at a time greater than t but less than T;
Classification D: the depth-sensing camera detects that the user is not gazing, and the body-movement recording sensor detects no activity feedback after a time greater than T;
the above classifications respectively represent the following possible activities:
Activity A: the user is watching television;
Activity B: the user is engaged in activities other than watching television;
Activity C: the user has entered light sleep;
Activity D: the user has entered deep sleep;
where t and T respectively denote the user's light-sleep and deep-sleep time thresholds, in minutes, obtained by combining the user's age and historical sleep times with big data.
According to the above technical scheme, in the user model building module, the method for building the user's model comprises the following steps:
Step 1: manually input the user's characteristics, including age, viewing interest type, and average viewing duration;
Step 2: integrate the data recorded by the sensor combination module;
Step 3: classify the user's identity in combination with the data;
Step 4: analyze the user's demands according to the classification result and the recorded data;
Step 5: interactively optimize the result of the user demand analysis.
According to the above technical solution, in step 5 the method for interactively optimizing the user demand analysis result further comprises the following steps:
Optimization step 1: calculate the search gain R;
Optimization step 2: in combination with the user's search behavior, record the time T_B the user takes to switch classification plates each time and the search time T_w within the current plate;
Optimization step 3: determine the effective value G_i of the user's search target from the historical search period and big data;
Optimization step 4: count the time T_J the user spends searching without a target.
According to the above technical solution, in optimization step 1 the search gain R is calculated from k, G_i, T_B, T_w, and T_J (the formula itself is given only as an equation image, BDA0003756196000000041, in the original), where k is a time-conversion coefficient with value range (0, 1), and T_B, T_w, and T_J are in minutes.
According to the above technical scheme, the information architecture optimization module comprises content classification optimization, display focus optimization, and interaction process optimization.
Compared with the prior art, the invention has the following beneficial effects: by providing the depth-sensing camera module and the body-movement recording sensor module, the depth-sensing camera can recognize the user's face, serving both encryption and identification; it also records viewing duration while the user exhibits gazing behavior and, combined with the body-movement recording sensor, judges whether the user is watching television. The user's data is private data, and storing it on a blockchain reduces the risk of privacy leakage.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic diagram of the system module composition of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides a technical solution: a multi-sensor intelligent interaction system comprising a sensor combination module, a user model building module, and an interaction optimization module. The sensor combination module collects optimizable user data from a plurality of sensors; the user model building module builds a usage-scene model of the user from the collected data; and the interaction optimization module visually optimizes the user's interaction scene and interaction mode. The user model building module comprises a manual input module, a sensor data integration module, an identity classification module, and a demand analysis module. The manual input module enters an individual user's identity information and usage habits into the system; the sensor data integration module analyzes and integrates the collected sensor data; the identity classification module classifies the user's identity according to the collected sensor data and the user's habits; and the demand analysis module analyzes the user's interaction demands. To accommodate a variety of users, a manual input module is provided for entering user information; at the same time, users are classified in combination with data collected by the sensors, their interaction demands and habits are further analyzed, and a complete user model is constructed.
The sensor combination module comprises a depth-sensing camera module, a body-movement recording sensor module, a behavior recording module, and a data uploading module. The depth-sensing camera module verifies the user's identity and records gazing behavior; the body-movement recording sensor module identifies and records the user's body movements; the behavior recording module records the user's behavioral characteristics while watching television; and the data uploading module transmits the data to a blockchain for storage. The depth-sensing camera can recognize the user's face, serving both encryption and identification; when the user exhibits gazing behavior it records the viewing duration, and combined with the body-movement recording sensor it judges whether the user is watching television. The user's data is private data, and blockchain storage reduces the risk of privacy leakage.
The interaction optimization module comprises a focus display module, a search gain calculation module, a classification redrawing module, and an information architecture optimization module. The focus display module changes the focused display layout via the television display driver board; the search gain calculation module calculates the user's gain during the search process; the classification redrawing module redraws the UI's classification menu in a targeted way; and the information architecture optimization module optimizes the module hierarchy of the core functions. The focus display module is electrically connected with the search gain calculation module and the classification redrawing module. The television display interface is focus-optimized according to each user's demands, ensuring that every user can find the desired interface more efficiently; the gain of each search is calculated and comprehensive optimization is performed according to the result; meanwhile, the navigation menu classification is redrawn for different users, improving search efficiency.
In the sensor combination module, the depth-sensing camera module and the body-movement recording sensor module form a series structure that identifies and records the user's state while watching television. The depth-sensing camera module has a gaze-sensing function, and the body-movement recording sensor module has a high-sensitivity multi-axis sensing function. The specific linkage method by which this series structure identifies and records the user's state while watching television is as follows:
Step S1: detect the user's line-of-sight angle and judge the gazing state. When a user watches television and gazes at the screen, the gaze-sensing function in the depth-sensing camera detects this and generates a start signal; a stop signal is generated when the user stops gazing. The start signal is represented as binary state 1 and the stop signal as binary state 0.
Step S2: send a start signal to the body-movement recording sensor module according to the monitoring result. The body-movement recording sensor module stays asleep most of the time; its receiving state opens only after the user's face is recognized and the user enters the television system, and it is released from sleep upon receiving the gaze-stop signal sent by the gaze-sensing module, thereby reducing power consumption.
Step S3: judge the user's behavior from the data recorded by the body-movement recording sensor.
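The linkage in steps S1 and S2 (binary gaze signals, and a motion sensor that sleeps until face recognition opens its receiving state and a gaze-stop signal wakes it) can be sketched as follows. The class and method names are illustrative assumptions, not identifiers from the patent.

```python
class MotionSensor:
    """Body-movement recording sensor that only records while awake."""

    def __init__(self):
        self.awake = False
        self.samples = []

    def wake(self):
        self.awake = True

    def sleep(self):
        self.awake = False

    def record(self, value):
        if self.awake:  # sleeping sensor ignores data, saving power
            self.samples.append(value)


class GazeLinkage:
    """Routes gaze signals (1 = gazing, 0 = stopped) to the motion sensor."""

    def __init__(self, sensor: MotionSensor):
        self.sensor = sensor
        self.receiving = False  # opens only after face recognition

    def on_face_recognized(self):
        self.receiving = True

    def on_gaze_signal(self, signal: int):
        if not self.receiving:
            return
        if signal == 0:
            self.sensor.wake()   # gaze stopped: start monitoring body movement
        else:
            self.sensor.sleep()  # gaze resumed: sensor sleeps again
```

A usage sequence: before face recognition, gaze signals are ignored; after recognition, a gaze-stop signal (0) wakes the sensor so it can record, and a gaze signal (1) puts it back to sleep.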
In step S3, the user behaviors and their judgment criteria fall into the following classifications:
Classification A: the depth-sensing camera detects that the user is gazing;
Classification B: the depth-sensing camera detects that the user is not gazing, and the body-movement recording sensor detects activity feedback within a time less than t;
Classification C: the depth-sensing camera detects that the user is not gazing, and the body-movement recording sensor detects activity feedback at a time greater than t but less than T;
Classification D: the depth-sensing camera detects that the user is not gazing, and the body-movement recording sensor detects no activity feedback after a time greater than T;
the above classifications respectively represent the following possible activities:
Activity A: the user is watching television;
Activity B: the user is engaged in activities other than watching television;
Activity C: the user has entered light sleep;
Activity D: the user has entered deep sleep;
where t and T respectively denote the user's light-sleep and deep-sleep time thresholds, in minutes, obtained by combining the user's age and historical sleep times with big data.
In the user model building module, the method for building the user's model comprises the following steps:
Step 1: manually input the user's characteristics, including age, viewing interest type, and average viewing duration;
Step 2: integrate the data recorded by the sensor combination module;
Step 3: classify the user's identity in combination with the data;
Step 4: analyze the user's demands according to the classification result and the recorded data;
Step 5: interactively optimize the result of the user demand analysis.
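The five model-building steps above can be sketched as a small pipeline. The field names, the age-based identity rule, and the demand rule are assumptions for illustration only; the patent does not specify how identity classification or demand analysis is computed.

```python
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    age: int                     # step 1: manually entered characteristics
    interest_type: str
    avg_viewing_minutes: float
    sensor_records: list = field(default_factory=list)  # step 2: integrated data
    identity_class: str = ""     # step 3: identity classification result
    needs: list = field(default_factory=list)           # step 4: analyzed demands


def build_user_model(age, interest_type, avg_minutes, sensor_data):
    profile = UserProfile(age, interest_type, avg_minutes)
    profile.sensor_records = list(sensor_data)          # step 2: integrate
    # step 3: classify identity (assumed rule, purely illustrative)
    profile.identity_class = "senior" if age >= 60 else "general"
    # step 4: analyze demands (assumed rule, purely illustrative)
    if profile.avg_viewing_minutes > 120:
        profile.needs.append("simplified navigation")
    return profile  # step 5 (interaction optimization) consumes this downstream
```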
In step 5, the method for interactively optimizing the user demand analysis result further comprises the following steps:
Optimization step 1: calculate the search gain R. When using a smart television, a user exhibits both non-targeted and targeted search behavior. In non-targeted search, the information the user browses and recognizes is gain-type information; in targeted search, the user's goal is already clear, and the measure of search efficiency is the time the search consumes.
Optimization step 2: in combination with the user's search behavior, record the time T_B the user takes to switch classification plates each time and the search time T_w within the current plate. A classification plate comprises several sub-plates, and the total switching time is T_B.
Optimization step 3: determine the effective value G_i of the user's search target from the historical search period and big data. In an internet-television scenario the search results are relatively complex; however the search ends, the user's search gain exceeds the target gain, but a numerical quantification is lacking. Combining the historical search period with the target effective value determined by big data allows the user's search gain to be quantified, making the calculation more tractable.
Optimization step 4: count the time T_J the user spends searching without a target.
In optimization step 1, the search gain R is calculated from k, G_i, T_B, T_w, and T_J (the formula itself is given only as an equation image, BDA0003756196000000081, in the original), where k is a time-conversion coefficient with value range (0, 1), and T_B, T_w, and T_J are in minutes. The search behaviors a user generates during interaction are hard to digitize; combining the user's search times with the target effective value to calculate the search gain expresses the user's search benefit more clearly and provides a data reference for interface optimization and the classification design of the interaction system. Specifically: the users' search gains R are ranked after face verification, and the display logic of the interface is selected according to the ranking.
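The ranking described here (sort users' search gains R after face verification and select the interface display logic from the ranking) can be sketched as below. Since the gain formula itself appears only as an image in the original, the gains are taken as precomputed inputs rather than derived from any assumed formula.

```python
def choose_display_logic(gains_by_user: dict) -> list:
    """Return user ids ordered by descending search gain R.

    The caller maps the top-ranked users to the interface display logic;
    the mapping itself is not specified in the text.
    """
    return sorted(gains_by_user, key=gains_by_user.get, reverse=True)
```

For example, `choose_display_logic({"user_a": 1.0, "user_b": 3.0})` would place `user_b` first, so the interface layout for `user_b` is prioritized after face verification.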
The information architecture optimization module comprises content classification optimization, display focus optimization, and interaction process optimization. Users of different age groups prefer different program types; user preference is judged by combining manually entered information with big data on the user's usual viewing records and search times, the programs the user is most interested in are displayed with focus on a sub-screen, and the interaction process is optimized for voice and gesture, reducing the user's learning cost.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A multi-sensor intelligent interaction system comprising a sensor combination module, a user model building module, and an interaction optimization module, characterized in that: the sensor combination module collects optimizable user data from a plurality of sensors; the user model building module builds a usage-scene model of the user from the collected data; the interaction optimization module visually optimizes the user's interaction scene and interaction mode; the user model building module comprises a manual input module, a sensor data integration module, an identity classification module, and a demand analysis module; the manual input module enters a single user's identity information and usage habits into the system; the sensor data integration module analyzes and integrates the collected sensor data; the identity classification module classifies the user's identity according to the collected sensor data and the user's habits; and the demand analysis module analyzes the user's interaction demands.
2. The multi-sensor intelligent interaction system according to claim 1, characterized in that: the sensor combination module comprises a depth-sensing camera module, a body-movement recording sensor module, a behavior recording module, and a data uploading module; the depth-sensing camera module verifies the user's identity and records gazing behavior; the body-movement recording sensor module identifies and records the user's body movements; the behavior recording module records the user's behavioral characteristics while watching TV; and the data uploading module transmits the data to a blockchain for storage.
3. The multi-sensor intelligent interaction system according to claim 2, characterized in that: the interaction optimization module comprises a focus display module, a search gain calculation module, a classification redrawing module, and an information architecture optimization module; the focus display module interfaces with the television display driver board to change the focused display layout; the search gain calculation module calculates the user's gain during the search process; the classification redrawing module redraws the UI's classification menu in a targeted way; the information architecture optimization module optimizes the module hierarchy of the core functions; and the focus display module is electrically connected with the search gain calculation module and the classification redrawing module.
4. The multi-sensor intelligent interaction system according to claim 3, characterized in that: in the sensor combination module, the depth-sensing camera module and the body-movement recording sensor module form a series structure that identifies and records the user's state while watching TV; the depth-sensing camera module has a gaze-sensing function, and the body-movement recording sensor module has a high-sensitivity multi-axis sensing function; the specific linkage method by which this series structure identifies and records the user's state while watching TV is as follows:
step S1: detect the user's line-of-sight angle and judge the watching state;
step S2: send an activation electrical signal to the body-motion recording sensor module according to the detection result;
step S3: judge the user's behavior from the data recorded by the body-motion recording sensor module.
5. The multi-sensor-based intelligent interaction system according to claim 4, characterized in that: in step S3, the user behaviors and their judgment criteria are specifically divided into the following classifications:
classification A: the depth-sensing camera detects that the user is in a watching state;
classification B: the depth-sensing camera detects that the user is not in a watching state, and the body-motion recording sensor detects activity feedback from the user within a time of less than t;
classification C: the depth-sensing camera detects that the user is not in a watching state, and the body-motion recording sensor detects activity feedback from the user after a time of more than t but less than T;
classification D: the depth-sensing camera detects that the user is not in a watching state, and the body-motion recording sensor detects no activity feedback from the user after a time of more than T;
the above classifications respectively correspond to the following activities:
activity A: the user is watching television;
activity B: the user is engaged in an activity other than watching television;
activity C: the user has entered a light-sleep state;
activity D: the user has entered a deep-sleep state;
wherein the times t and T respectively denote the user's light-sleep and deep-sleep time thresholds, in minutes, obtained from big data by combining the user's age with the user's historical sleep times.
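The classification rules of claims 4 and 5 can be sketched as follows. This is a minimal illustration only; the function name, parameter names, and the return labels are assumptions, not part of the claims.

```python
from typing import Optional

def classify_state(gazing: bool, feedback_delay: Optional[float],
                   t: float, T: float) -> str:
    """Classify user activity from the two sensors in the series structure.

    gazing         -- depth-sensing camera reports the user is in a watching state
    feedback_delay -- minutes until the body-motion sensor detects activity
                      feedback (None = no feedback observed)
    t, T           -- light-sleep and deep-sleep thresholds (minutes), derived
                      per user from age and historical sleep data
    """
    if gazing:
        return "A"  # watching television
    if feedback_delay is not None and feedback_delay < t:
        return "B"  # activity other than watching television
    if feedback_delay is not None and feedback_delay < T:
        return "C"  # light sleep (feedback between t and T)
    return "D"      # deep sleep (no feedback after time T)
```

For example, with thresholds t = 10 and T = 30 minutes, motion feedback arriving 20 minutes after the user stops gazing falls into classification C (light sleep).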
6. The multi-sensor-based intelligent interaction system according to claim 5, characterized in that: in the user model building module, the method for building the user model comprises the following steps:
step one: manually enter the user characteristics, which include age, viewing interest type, and average viewing duration;
step two: integrate the data recorded by the sensor combination module;
step three: classify the user's identity in combination with the data;
step four: analyze the user's requirements according to the classification result and the recorded data;
step five: perform interaction optimization on the user requirement analysis result.
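The five model-building steps of claim 6 can be sketched as a simple pipeline. The data structure, field names, and the identity-classification rule below are illustrative assumptions; the claim does not specify them.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserProfile:
    age: int                     # step one: manually entered characteristics
    interest_type: str
    avg_viewing_minutes: float
    sensor_records: List[str] = field(default_factory=list)  # step two

def build_user_model(profile: UserProfile) -> dict:
    # Step three: classify the user's identity (placeholder rule).
    identity = "minor" if profile.age < 18 else "adult"
    # Step four: analyze requirements from the classification and records.
    needs = {"interest": profile.interest_type,
             "session_length": profile.avg_viewing_minutes}
    # Step five: the result is handed to the interaction-optimization module.
    return {"identity": identity, "needs": needs}
```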
7. The multi-sensor-based intelligent interaction system according to claim 6, characterized in that: in step five, the method for interactively optimizing the user requirement analysis result further comprises the following steps:
optimization step 1: calculate the search gain R;
optimization step 2: in combination with the user's search behavior, record the time T_B each time the user switches between classification boards and the time T_w spent searching within the current board;
optimization step 3: determine the user's search-target valid value G_i from big data based on the historical search period;
optimization step 4: count the time T_J the user spends searching without a target.
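The quantities gathered in optimization steps 2 to 4 can be held in a small record like the one below (field names are assumptions). The formula that combines them into the search gain R is given in claim 8 only as an image and is therefore not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class SearchMetrics:
    """Inputs to the search-gain calculation, all in minutes except g_i."""
    t_b: float  # time per switch between classification boards (step 2)
    t_w: float  # time spent searching within the current board (step 2)
    g_i: float  # search-target valid value from historical search data (step 3)
    t_j: float  # time spent searching without a target (step 4)
```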
8. The multi-sensor-based intelligent interaction system according to claim 7, characterized in that: in optimization step 1, the search gain R is calculated by the following formula:
[Formula image FDA0003756195990000031]
wherein k is a time conversion coefficient with a value range of (0, 1), and T_B, T_w, and T_J are in minutes.
9. The multi-sensor-based intelligent interaction system according to claim 8, characterized in that: the information architecture optimization module comprises content classification optimization, display focus optimization, and interaction process optimization.
CN202210861778.6A 2022-07-20 2022-07-20 Intelligent interaction system based on multiple sensors Active CN115407867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210861778.6A CN115407867B (en) 2022-07-20 2022-07-20 Intelligent interaction system based on multiple sensors


Publications (2)

Publication Number Publication Date
CN115407867A true CN115407867A (en) 2022-11-29
CN115407867B CN115407867B (en) 2023-10-24

Family

ID=84158047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210861778.6A Active CN115407867B (en) 2022-07-20 2022-07-20 Intelligent interaction system based on multiple sensors

Country Status (1)

Country Link
CN (1) CN115407867B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796734A (en) * 2015-03-20 2015-07-22 四川长虹电器股份有限公司 Real-time interactive smart television program combined recommendation system and method
US20160209919A1 (en) * 2013-08-29 2016-07-21 Sony Corporation Information processing device and information processing method
CN109068149A (en) * 2018-09-14 2018-12-21 深圳Tcl新技术有限公司 Program commending method, terminal and computer readable storage medium
CN111629254A (en) * 2020-05-18 2020-09-04 南京莱科智能工程研究院有限公司 Scene-based intelligent television program recommending control system
CN114501144A (en) * 2022-01-13 2022-05-13 深圳灏鹏科技有限公司 Image-based television control method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN115407867B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
US10084964B1 (en) Providing subject information regarding upcoming images on a display
JP5715390B2 (en) Viewing terminal device, viewing statistics device, viewing statistics processing system, and viewing statistics processing method
CN101925916B (en) Method and system for controlling electronic device based on media preferences
US11409817B2 (en) Display apparatus and method of controlling the same
US20090271826A1 (en) Method of recommending broadcasting contents and recommending apparatus therefor
CN109874053A (en) The short video recommendation method with user's dynamic interest is understood based on video content
KR20150065686A (en) Context-based content recommendations
CN101877060A (en) Messaging device and method and program
CN101551825A (en) Personalized film recommendation system and method based on attribute description
CN102870425A (en) Primary screen view control through kinetic ui framework
CN103763585A (en) User characteristic information obtaining method and device and terminal device
CN103942243A (en) Display apparatus and method for providing customer-built information using the same
CN101984657A (en) Digital television terminal as well as method and device for generating context menu at application interface thereof
CN105825098B (en) Unlocking screen method, image-pickup method and the device of a kind of electric terminal
CN102467815B (en) Multifunctional remote controlller, remote control thereof and energy consumption monitoring method
CN109783656A (en) Recommended method, system and the server and storage medium of audio, video data
CN103686233A (en) Recording and playback method of digital TV programs, digital TV recording and playback device and server
CN103856813A (en) Television program switching system and method
CN115407867A (en) Intelligent interaction system based on multiple sensors
JP6433615B1 (en) Viewing record analysis apparatus, viewing record analysis method, and viewing record analysis program
US9727312B1 (en) Providing subject information regarding upcoming images on a display
US10706601B2 (en) Interface for receiving subject affinity information
CN110545455A (en) Smart television recommendation method based on fingerprint identification
CN104281516A (en) Methods and apparatus to characterize households with media meter data
CN111314715A (en) Chinese traditional culture education information processing system based on internet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230926

Address after: Room 401, Unit 1, Building 2, Anyuan Community, Jianping Town, Langxi County, Xuancheng City, Anhui Province, 242000

Applicant after: Xia Qianming

Address before: No. 427, Xiexin Road, Taicang City, Suzhou City, Jiangsu Province, 215000

Applicant before: Qi Hongxin

GR01 Patent grant
GR01 Patent grant