WO2018171196A1 - A control method, terminal, and system (一种控制方法、终端及系统)


Info

Publication number: WO2018171196A1
Authority: WO (WIPO, PCT)
Prior art keywords: data, terminal, feature data, user, information
Application number: PCT/CN2017/108458
Other languages: English (en), French (fr)
Inventors: 王剑锋, 陈浩, 周胜丰, 王卿
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to US16/496,265 (published as US11562271B2)
Priority to CN201780088792.4A (published as CN110446996A)
Publication of WO2018171196A1


Classifications

    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • A63F13/428: Processing input control signals of video game devices involving motion or position input signals, e.g. signals sensed by accelerometers or gyroscopes
    • A63F13/533: Controlling output signals based on the game progress, involving additional visual information for prompting the player, e.g. displaying a game menu
    • A63F13/573: Simulating properties, behaviour or motion of objects in the game world using trajectories of game objects, e.g. of a golf ball according to the point of impact
    • A63F13/73: Authorising game programs or game devices, e.g. checking authenticity
    • A63F13/79: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/812: Ball games, e.g. soccer or baseball
    • A63F13/212: Input arrangements using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • G06F18/256: Fusion of classification results relating to different input data, e.g. multimodal recognition
    • G06F3/012: Head tracking input arrangements
    • G06F3/015: Input arrangements based on nervous system activity detection, e.g. brain waves (EEG) or electromyograms (EMG)
    • G06N20/00: Machine learning
    • G06N5/045: Explanation of inference; explainable or interpretable artificial intelligence (XAI)
    • G06T7/20: Image analysis; analysis of motion
    • G06V10/811: Fusion of classification results from classifiers operating on different input data, e.g. multi-modal recognition
    • G06V20/00: Scenes; scene-specific elements
    • G06V40/70: Multimodal biometrics, e.g. combining information from different biometric modalities
    • G06V40/12: Fingerprints or palmprints
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06F2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate, skin temperature, facial expressions, iris, voice pitch or brain activity patterns
    • A63F2300/1012: Input arrangements involving biosensors worn by the player, e.g. for measuring heart beat or limb activity
    • A63F2300/303: Output arrangements for displaying additional data, e.g. simulating a head-up display
    • A63F2300/5546: Game data or player data management using player registration data, e.g. identification, account, preferences, game history
    • A63F2300/8082: Virtual reality

Definitions

  • the present application relates to the field of data processing, and in particular, to a control method, a terminal, and a system.
  • In the prior art, the data sources available to an interactive system are quite limited, resulting in poor realism of the interactive system in virtual scenes.
  • For example, existing VR glasses collect only an image of the eye with a camera to implement gaze tracking and intelligent interaction, so the gaze estimation is rather limited and the VR experience is poor.
  • The present application provides a control method, a terminal, and a system, aiming to solve the technical problem that the accuracy of an interactive system is poor because of limited data sources.
  • A first aspect of the present application provides a control method applicable to a terminal, including the steps of: acquiring feature data by using at least one sensor, generating an action instruction according to the feature data and a decision mechanism of the terminal, and then executing the action instruction to implement interactive control. In other words, the application collects diverse feature data through multiple sensors, analyzes the data, and generates corresponding action instructions based on the corresponding decision mechanism to implement interactive control. Compared with the prior art, whose accuracy suffers from limited data sources, the application remedies this defect by adding data sources and making interaction-control decisions from multiple aspects, which significantly improves the accuracy of interactive control.
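As a rough illustration of this first aspect (acquire feature data, generate an action instruction under a decision mechanism, execute it), the following Python sketch shows one possible structure. The sensor names, thresholds, and instruction types are illustrative assumptions, not the patent's actual implementation.

```python
# Illustrative sketch of the acquire -> decide -> execute loop described above.
# Sensor names, thresholds, and instruction types are assumptions for illustration.

def acquire_feature_data(sensors):
    """Collect one reading from every available sensor into a feature dict."""
    return {name: read() for name, read in sensors.items()}

def generate_action_instruction(features, decision_mechanism):
    """Map feature data to an action instruction via the terminal's decision mechanism."""
    for rule in decision_mechanism:
        if rule["condition"](features):
            return rule["instruction"]
    return None  # no rule matched, no action

def execute(instruction):
    if instruction is not None:
        print(f"executing: {instruction}")

# Example decision mechanism: enter a "quiet work mode" when the environment is noisy
# and the brain-wave data suggests concentration (hypothetical threshold values).
decision_mechanism = [
    {
        "condition": lambda f: f["noise_db"] > 60 and f["eeg_focus"] > 0.7,
        "instruction": "enter_quiet_work_mode",
    },
]

sensors = {
    "noise_db": lambda: 65.0,   # sound sensor (stub)
    "eeg_focus": lambda: 0.8,   # brain-wave sensor (stub)
}

features = acquire_feature_data(sensors)
execute(generate_action_instruction(features, decision_mechanism))
```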
  • In one implementation, the feature data includes at least biometric data of the user and environmental feature data; generating the action instruction according to the feature data and the decision mechanism of the terminal includes: controlling the terminal to enter a target working mode according to at least one item of the biometric data, such as brain wave data, biological indicator data, and muscle motion data, and at least one item of the environmental feature data, such as temperature data, humidity data, noise data, light intensity data, and air quality data.
  • Alternatively, the feature data includes only the environmental feature data; generating the action instruction according to the feature data and the decision mechanism of the terminal includes: controlling the terminal to enter the target working mode based on at least a part of the environmental feature data.
  • In another implementation, the feature data includes at least the biometric data of the user and the environmental feature data; generating the action instruction according to the feature data and the decision mechanism of the terminal includes: controlling the terminal to display the user's current motion picture at least according to the brain wave data and muscle motion data in the biometric data and the temperature data, humidity data, image data, and sound data in the environmental feature data.
  • Alternatively, the feature data includes only the environmental feature data; generating the action instruction according to the feature data and the decision mechanism of the terminal includes: controlling the terminal to display the current motion picture based on at least a part of the environmental feature data.
  • In another implementation, the feature data includes at least the biometric data of the user and the environmental feature data; generating the action instruction according to the feature data and the decision mechanism of the terminal includes: controlling the terminal to prompt road driving information at least according to the biological indicator data and brain wave data of the user in the biometric data, and the image data of the road surface, the speed data of the vehicle, the temperature data, the position data, and the humidity data in the environmental feature data.
  • Alternatively, the feature data includes only the environmental feature data; generating the action instruction according to the feature data and the decision mechanism of the terminal includes: controlling the terminal to prompt the road driving information based on at least a part of the environmental feature data.
  • In a fourth implementation, generating the action instruction according to the feature data and the decision mechanism includes: obtaining environment state information at least according to the temperature data, humidity data, image data, image depth data, direction data, and position data in the feature data, where the environment state information includes object element information and comfort information of the environment; and controlling the terminal to prompt environment-related information according to the object element information and the comfort information of the environment.
  • In a fifth implementation of the first aspect, generating the action instruction according to the feature data and the decision mechanism includes: obtaining biological state information at least according to the muscle motion state data, brain wave data, and facial image data in the feature data, where the biological state information includes at least biological motion state information and biological emotional state information; and controlling the terminal to display biology-related information according to the biological motion state information and the biological emotional state information.
  • Optionally, the biometric data acquired by the sensors may be used to identify the user, for example based on one or any combination of fingerprint data, iris data, and facial data. Identity authentication can gate the flow in at least two ways: either the above action of generating the instruction is performed only after the identity authentication is passed, or the instruction is generated first and the above action of executing the instruction is performed only after the identity authentication is passed. Either approach may be used; this solution is not limited in this respect.
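A minimal sketch of such gating is shown below; the stub matchers, the "any modality passes" policy, and the `gate` parameter are assumptions made for illustration, not the patent's prescribed logic.

```python
# Sketch of gating instruction generation/execution on multi-modal identity authentication.
# Matching functions and the fusion policy are placeholders.

def match_fingerprint(sample, enrolled): return sample == enrolled   # stub matcher
def match_iris(sample, enrolled):        return sample == enrolled   # stub matcher
def match_face(sample, enrolled):        return sample == enrolled   # stub matcher

def authenticate(samples, enrolled):
    """Pass if any one modality (or any combination) matches the enrolled data."""
    checks = [
        match_fingerprint(samples.get("fingerprint"), enrolled.get("fingerprint")),
        match_iris(samples.get("iris"), enrolled.get("iris")),
        match_face(samples.get("face"), enrolled.get("face")),
    ]
    return any(checks)

def run(samples, enrolled, features, decide, execute, gate="generate"):
    """gate='generate': authenticate before generating the instruction;
       gate='execute' : generate first, authenticate before executing."""
    if gate == "generate":
        if authenticate(samples, enrolled):
            execute(decide(features))
    else:
        instruction = decide(features)
        if authenticate(samples, enrolled):
            execute(instruction)

enrolled = {"fingerprint": "fp-1", "iris": "iris-1", "face": "face-1"}
samples = {"fingerprint": "fp-1"}
print(authenticate(samples, enrolled))   # True: one matching modality is enough here
```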
  • Optionally, the action instruction includes at least one or more of: controlling the terminal to play a voice prompt, controlling the terminal to display content, and controlling the terminal to launch an application.
  • Optionally, generating the action instruction according to the feature data and the decision mechanism of the terminal includes: analyzing the feature data to obtain an output result; determining the decision mechanism corresponding to the feature data; and determining, according to the decision mechanism, the action instruction corresponding to the output result.
  • Optionally, analyzing the feature data to obtain the output result includes: identifying and classifying the feature data by data source, and processing each class of feature data with a corresponding data processing algorithm to obtain the output result.
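The two-step analysis (classify by data source, then dispatch each class to its own processing algorithm) could look roughly like the sketch below. The source labels, the key names, and the stand-in algorithms are assumptions for illustration only.

```python
# Sketch: classify feature data by its source, then hand each class to a corresponding
# processing algorithm. Labels and algorithms are illustrative stand-ins.

def classify_by_source(feature_data):
    biometric_keys = {"fingerprint", "iris", "eeg", "heart_rate", "muscle"}
    grouped = {"biometric": {}, "environmental": {}}
    for key, value in feature_data.items():
        bucket = "biometric" if key in biometric_keys else "environmental"
        grouped[bucket][key] = value
    return grouped

def process_biometric(data):
    # e.g. feature recognition: fingerprint/iris/face recognition, motion-state recognition
    return {"identity_match": data.get("fingerprint") == "enrolled-template"}

def process_environmental(data):
    # e.g. physical-characteristic recognition: object type, ambient temperature/humidity
    return {"too_hot": data.get("temperature_c", 20) > 30}

processors = {"biometric": process_biometric, "environmental": process_environmental}

feature_data = {"fingerprint": "enrolled-template", "temperature_c": 32, "humidity": 0.6}
output = {name: processors[name](group)
          for name, group in classify_by_source(feature_data).items()}
print(output)  # {'biometric': {'identity_match': True}, 'environmental': {'too_hot': True}}
```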
  • For example, a biometric data processing algorithm may be used to perform feature recognition on the biometric data to obtain an output result, where the output result includes at least one of: a fingerprint recognition result, an iris recognition result, a face recognition result, and a biological motion state recognition result.
  • Likewise, a physical characteristic data processing algorithm may be used to recognize elements in the environment to obtain an output result, where the output result includes one or any combination of: object type, size, orientation, material, state, ambient temperature, and ambient humidity recognition results.
  • Optionally, the solution may also perform data learning and data correction on the output result. Further, the feature data and the learned and corrected output result are stored; after the action instruction is executed, feedback information is generated from the execution result and the stored data, so as to improve the accuracy of the next output.
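One very simple way such a feedback loop could be structured is sketched below; the running bias correction and its learning rate are illustrative assumptions and not the patent's learning algorithm.

```python
# Sketch of the store-and-feed-back loop described above: the corrected output and the
# execution result are retained and used to nudge the next output.

class FeedbackCorrector:
    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        self.bias = 0.0        # learned correction applied to raw outputs
        self.history = []      # stored (feature_data, corrected_output, result) tuples

    def correct(self, raw_output):
        return raw_output + self.bias

    def record(self, feature_data, corrected_output, execution_result):
        self.history.append((feature_data, corrected_output, execution_result))
        # feedback: execution_result is the value actually observed after acting
        error = execution_result - corrected_output
        self.bias += self.lr * error   # move future outputs toward observed reality

corrector = FeedbackCorrector()
raw = 0.7                                   # e.g. an estimated score from sensor analysis
out = corrector.correct(raw)
corrector.record({"speed": raw}, out, execution_result=0.9)
print(round(corrector.bias, 3))             # 0.02 -> the next output is corrected upward
```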
  • In another aspect, an embodiment of the present application provides a terminal, including a processor and a memory, where the memory is used to store computer-executable instructions, the processor is connected to the memory through a bus, and when the terminal runs, the processor executes the computer-executable instructions stored in the memory to cause the terminal to perform any of the methods described above.
  • In another aspect, an embodiment of the present application provides a terminal, including: at least one sensor, configured to acquire feature data; and a processor, configured to generate an action instruction according to the feature data and a decision mechanism of the terminal, and to execute the action instruction.
  • In another aspect, an embodiment of the present application provides an apparatus, including: an acquiring unit, configured to acquire feature data by using at least one sensor, where the feature data is data collected by the terminal through the at least one sensor; a generating unit, configured to generate an action instruction according to the feature data and a decision mechanism of the terminal; and an execution unit, configured to execute the action instruction.
  • In another aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when run on any of the foregoing terminals, cause the terminal to perform any of the methods described above.
  • In another aspect, an embodiment of the present application provides a computer program product including instructions that, when run on any of the foregoing terminals, cause the terminal to perform any of the methods described above.
  • In another aspect, an embodiment of the present application provides a control system, including at least one sensor that collects feature data and a control terminal, where the control terminal acquires the feature data through the at least one sensor, generates an action instruction according to the feature data and a decision mechanism of the control terminal, and then executes the action instruction to implement interaction control.
  • In summary, the present application collects feature data of various aspects through multiple sensors, analyzes the data, and generates corresponding action instructions based on the corresponding decision mechanism to implement interactive control.
  • Compared with the prior art, which suffers from limited data sources, the present application remedies this defect by adding data sources and making interaction-control decisions from multiple aspects, thereby significantly improving the accuracy of interactive control.
  • the names of the foregoing terminals are not limited to the devices themselves, and in actual implementation, the devices may appear under other names. As long as the functions of the respective devices are similar to the embodiments of the present application, they are within the scope of the claims and their equivalents.
  • FIG. 1 is a structural diagram of a terminal in an embodiment of the present application
  • FIG. 3 is a schematic diagram of an application example of an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an application example of an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an application example of an embodiment of the present application.
  • FIG. 7 is a diagram showing an application example of an embodiment of the present application.
  • FIG. 8 is a diagram showing an application example of an embodiment of the present application.
  • FIG. 9 is a diagram showing an application example of an embodiment of the present application.
  • FIG. 10 is a diagram showing an application example of an embodiment of the present application.
  • FIG. 11 is a diagram showing an application example of an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a control terminal according to an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a control system according to an embodiment of the present application.
  • FIG. 14 is a diagram showing another application example of an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an application scenario according to an embodiment of the present application.
  • This application is mainly for the next generation of intelligent terminals.
  • Machine learning methods on the mobile side are used as an aid to build a complex single-point and multi-point decision-making interactive system.
  • The system is mainly used to enhance the experience of immersive virtual interaction, to improve the realism of interaction and feedback in virtual scenes, and to enhance interaction strategies in reality, giving better arrangements and suggestions and providing better assistance for improving work and life.
  • Before the solution of the present application is introduced in detail, two scenarios to which the solution is applicable are first described for ease of understanding.
  • The simulated tennis motion system in this embodiment is implemented by an interactive control terminal, such as a VR game terminal.
  • The interactive control terminal includes corresponding modules or components such that it performs the steps shown in FIG. 14: user identity establishment and intelligent access, establishing a real stadium environment model, establishing a real hitting model, simulating immersive sports, and giving smart reminders and advice. The details are as follows:
  • the interactive control terminal is provided with a built-in memory or an external memory, and the user's identity authentication information is pre-stored.
  • The fingerprint sensor of the interactive control terminal collects fingerprint data (the fingerprint sensor may be an independent entity connected to the interactive control terminal via WiFi or Bluetooth), or iris data is collected through an iris sensor, and the data is sent to a processor in the terminal.
  • The processor compares the collected data against the results in the existing database.
  • If matching data exists in the database, the terminal enters the corresponding user's control, for example generating and executing an action instruction to open the court interface (correspondingly, in a payment scenario, if matching data exists, an action instruction to open the payment page is generated and executed to enter the payment page, or an action instruction for debiting is generated and executed to complete the payment function). If no match exists, face recognition is performed on facial image data collected by a single camera or multiple cameras to assist in identifying the user. If the corresponding user still cannot be found with this auxiliary identification, an action instruction for creating a new user space is generated and executed, a new user account is created, and the corresponding functions become available after the user logs in to the account.
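A compact sketch of this admission flow is given below; the in-memory database, the equality-based matchers, and the returned instruction names are hypothetical placeholders.

```python
# Sketch of the admission flow described above: try fingerprint/iris against the local
# database, fall back to face recognition, and create a new user space if both fail.

user_db = {"alice": {"fingerprint": "fp-alice", "face": "face-alice"}}

def find_by_fingerprint_or_iris(sample):
    return next((u for u, d in user_db.items() if d.get("fingerprint") == sample), None)

def find_by_face(image):
    return next((u for u, d in user_db.items() if d.get("face") == image), None)

def admit(fingerprint_sample, face_image):
    user = find_by_fingerprint_or_iris(fingerprint_sample)
    if user is None:
        user = find_by_face(face_image)          # auxiliary identification
    if user is None:
        user = f"user-{len(user_db) + 1}"        # create a new user space
        user_db[user] = {"fingerprint": fingerprint_sample, "face": face_image}
        return user, "new_account_created"
    return user, "open_court_interface"          # e.g. the action instruction to execute

print(admit("fp-alice", "face-alice"))   # ('alice', 'open_court_interface')
print(admit("fp-bob", "face-bob"))       # ('user-2', 'new_account_created')
```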
  • After the user logs in, the interactive control terminal establishes a real court environment model, for example letting the user come to a favorite stadium and play a realistic tennis game with old friends through chat software or online.
  • The interactive control terminal collects image data, image depth data, position data, direction data, and the like through various sensors; the processor analyzes them and performs image recognition on the real ground of the stadium, and, by comparing it against the court texture data pre-stored in a stadium ground-texture database, determines the court texture of the real stadium and the ground elasticity coefficient of the court, and builds a tennis model on this basis.
  • The interactive control terminal in this embodiment also performs image recognition on the entire stadium and combines the image depth data to determine the size of the court, the position of the sidelines, the height and position of the net, the layout and location of the stadium, and so on, and then integrates the above to establish a realistic 3D virtual model of the stadium.
  • The interactive control terminal includes the following processes when establishing the real hitting model:
  • The initial speed of the ball is combined with the tennis model to determine the trajectory of the tennis ball.
  • Sound feedback is given when the ball is hit, and video simulation feedback is provided throughout.
  • The strength of the muscle pressure feedback is determined according to the speed of the tennis ball.
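A toy version of such a hitting model is sketched below: a simple projectile trajectory damped by the court's elasticity coefficient, and a haptic intensity that scales with ball speed. The physics model, the restitution value, and the scaling constant are all illustrative assumptions.

```python
# Sketch of the hitting model described above: trajectory from the initial speed, and
# muscle-pressure feedback strength scaling with ball speed.

import math

def trajectory(initial_speed, launch_angle_deg, restitution=0.75, dt=0.01, g=9.81):
    """Return (x, y) samples of the ball's path, with one bounce damped by the court's
    elasticity (restitution) coefficient recovered from the court-texture model."""
    vx = initial_speed * math.cos(math.radians(launch_angle_deg))
    vy = initial_speed * math.sin(math.radians(launch_angle_deg))
    x, y, points, bounced = 0.0, 1.0, [], False   # struck roughly 1 m above the ground
    while y >= 0.0 or not bounced:
        points.append((x, y))
        x, y, vy = x + vx * dt, y + vy * dt, vy - g * dt
        if y < 0.0 and not bounced:
            y, vy, bounced = 0.0, -vy * restitution, True
    return points

def feedback_strength(ball_speed, max_speed=70.0):
    """Map ball speed to a 0..1 haptic intensity for the muscle-pressure feedback."""
    return min(ball_speed / max_speed, 1.0)

path = trajectory(initial_speed=30.0, launch_angle_deg=10.0)
print(len(path), round(feedback_strength(30.0), 2))   # number of samples, ~0.43
```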
  • This solution effectively improves the user's interaction while playing, so that the user can become better immersed in the game, improving the game experience.
  • As an example of applying the solution in a driving scenario, the application of the solution to a smart itinerary is illustrated, taking a trip simulation of driving to a meeting as an example:
  • The trip simulation in this embodiment is implemented by an interactive control terminal such as an intelligent in-vehicle device, and the interactive control terminal may include functions such as: user identity establishment and intelligent admission, user physiological state determination, user mental state determination, intelligent environment judgment and intelligent arrangement, and smart reminders and suggestions:
  • The interactive control terminal collects fingerprint data or iris data through sensors and sends the data to the processor in the terminal, which compares it with the data in the existing database. If matching data exists in the database, the terminal enters the corresponding user's control, for example generating and executing an instruction to open the driving simulation page; if no match exists, the processor combines facial image data to assist recognition, and if there is still no corresponding user, it generates and executes the action instruction for creating a new user space to implement the corresponding functions.
  • The interactive control terminal collects relevant physiological characteristic data (including the user's magnetic field, various blood indexes, heart rate, body temperature, etc.) through the sensors and sends it to the processor in the terminal; the processor processes the data based on corresponding biomedical analysis algorithms and, combined with machine learning of the user's physical history, determines the user's physiological state (including health status, fatigue, etc.) and determines the impact on the user's itinerary and corresponding recommendations (for example, fatigue is very high, so driving is not advisable).
  • The interactive control terminal collects the image data of the user's face and brain wave data; the processor uses image recognition technology to perform micro-expression recognition on the image data and excitability analysis on the brain wave data, and combines the micro-expression and the degree of excitement to determine corresponding mental indicators (for example, the mental state is hyperactive, so the user should drive cautiously).
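As a rough illustration of fusing a micro-expression score with a brain-wave excitability score into a mental indicator, consider the sketch below; the scoring functions, weights, and thresholds are assumptions for illustration.

```python
# Sketch of the mental-state judgment described above: fuse a face-derived score and an
# EEG-derived score into a simple mental indicator.

def micro_expression_score(face_image):
    # stand-in for image-recognition based micro-expression analysis, 0 (calm) .. 1 (agitated)
    return 0.8

def excitability_score(eeg_samples):
    # stand-in for brain-wave excitability analysis, 0 (drowsy) .. 1 (hyperactive)
    return sum(eeg_samples) / len(eeg_samples)

def mental_state(face_image, eeg_samples, w_face=0.4, w_eeg=0.6):
    score = (w_face * micro_expression_score(face_image)
             + w_eeg * excitability_score(eeg_samples))
    if score > 0.7:
        return "hyperactive: drive cautiously"
    if score < 0.3:
        return "low alertness: rest before driving"
    return "normal"

print(mental_state(face_image=None, eeg_samples=[0.9, 0.8, 0.7]))  # hyperactive: drive cautiously
```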
  • The interactive control terminal collects the image data of the road surface and the speed data of the vehicle during driving; the processor analyzes the image data, uses image recognition technology to identify the road environment, combines the speed data and the trip data, determines the driving conditions, and gives corresponding suggestions (for example, road conditions are good and travel time is tight, so speed may be increased appropriately), such as generating and executing action instructions to display the suggested information.
  • the interactive control terminal collects physical data such as temperature data and humidity data, and the processor analyzes the data to determine the driving experience corresponding to the environment (such as a bad environment, which is not suitable for long driving).
  • the processor performs comprehensive judgment, and intelligently advises the user to drive safety matters, driving speed, midway rest time, and risk warning that may miss the trip.
  • the processor performs comprehensive judgment to improve the driving environment and enhance the user experience (such as selecting some light music, adjusting the temperature of the air conditioner, etc.).
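A minimal rule-based sketch of this comprehensive judgment stage is shown below, combining physiological, mental, road, schedule, and in-cabin signals into reminders; every rule and threshold here is an illustrative assumption rather than the patent's actual decision mechanism.

```python
# Sketch of the comprehensive judgment described above: combine several indicators into
# driving reminders and environment adjustments.

def driving_advice(fatigue, mental, road_condition, minutes_late, cabin_temp_c):
    advice = []
    if fatigue > 0.7:
        advice.append("fatigue is high: not suitable to drive, or plan a midway rest")
    if mental == "hyperactive":
        advice.append("mental state is hyperactive: drive cautiously")
    if road_condition == "good" and minutes_late > 0 and fatigue <= 0.7:
        advice.append("road conditions are good and the schedule is tight: speed up moderately")
    if minutes_late > 30:
        advice.append("risk warning: the trip may be missed")
    if not 18 <= cabin_temp_c <= 26:
        advice.append("adjust the air-conditioner temperature; consider light music")
    return advice

for line in driving_advice(fatigue=0.4, mental="hyperactive",
                           road_condition="good", minutes_late=10, cabin_temp_c=29):
    print(line)
```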
  • the scheme effectively improves the safety of driving, and on the basis of ensuring safe driving of the user, the driving comfort of the user is effectively improved.
  • The control method provided by the embodiments of the present application can be applied to a mobile phone, an augmented reality (AR) or virtual reality (VR) device, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), and other terminals.
  • the terminal in the embodiment of the present application may be the mobile phone 100.
  • The embodiment will be described below by taking the mobile phone 100 as an example. It should be understood that the illustrated mobile phone 100 is only one example of the above terminal; the mobile phone 100 may have more or fewer components than those shown in the figure, may combine two or more components, or may have a different component configuration.
  • The mobile phone 100 may specifically include components such as: a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, one or more sensors 106, a Wi-Fi device 107, a positioning device 108, an audio circuit 109, a peripheral interface 110, and a power system 111. These components can communicate over one or more communication buses or signal lines (not shown in Figure 2). It will be understood by those skilled in the art that the hardware structure shown in FIG. 2 does not constitute a limitation on the mobile phone; the mobile phone 100 may include more or fewer components than those illustrated, may combine some components, or may use a different arrangement of components.
  • The processor 101 is the control center of the mobile phone 100; it connects various parts of the mobile phone 100 through various interfaces and lines, and performs the various functions of the mobile phone 100 and processes data by running or executing applications stored in the memory 103 and calling data stored in the memory 103.
  • the processor 101 may include one or more processing units; for example, the processor 101 may be a Kirin 960 chip manufactured by Huawei Technologies Co., Ltd.
  • the processor 101 may further include a fingerprint verification chip for verifying the collected fingerprint.
  • the processor 101 may further include a graphics processing unit (GPU) 115.
  • the GPU 115 is a microprocessor that performs image computing operations on personal computers, workstations, game consoles, and some mobile devices (such as tablets, smart phones, etc.). It can convert the display information required by the mobile phone 100 and provide a line scan signal to the display 104-2 to control the correct display of the display 104-2.
  • the mobile phone 100 may send a corresponding drawing command to the GPU 115.
  • For example, the drawing command may be "draw a rectangle of length a and width b at the coordinate position (x, y)".
  • the GPU 115 can quickly calculate all the pixels of the graphic according to the drawing instruction, and draw corresponding graphics on the specified position on the display 104-2.
  • the GPU 115 may be integrated in the processor 101 in the form of a functional module, or may be disposed in the mobile phone 100 in a separate physical form (for example, a video card).
  • The radio frequency circuit 102 can be used to receive and transmit wireless signals during the sending and receiving of information or during a call.
  • Specifically, the radio frequency circuit 102 can receive downlink data from the base station and deliver it to the processor 101 for processing, and can also transmit uplink data to the base station.
  • radio frequency circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency circuit 102 can also communicate with other devices through wireless communication.
  • the wireless communication can use any communication standard or protocol, including but not limited to global mobile communication systems, general packet radio services, code division multiple access, wideband code division multiple access, long term evolution, email, short message service, and the like.
  • the memory 103 is used to store applications and data, and the processor 101 executes various functions and data processing of the mobile phone 100 by running applications and data stored in the memory 103.
  • The memory 103 mainly includes a program storage area and a data storage area, where the program storage area can store an operating system and applications required by at least one function (such as a sound playing function or an image playing function), and the data storage area can store data created according to the use of the mobile phone 100 (such as audio data and a phone book).
  • In addition, the memory 103 may include high-speed random access memory (RAM), and may also include non-volatile memory such as a magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • The memory 103 can store various operating systems, for example the iOS operating system developed by Apple and the Android operating system developed by Google Inc.
  • the above memory 103 may be independent and connected to the processor 101 via the above communication bus; the memory 103 may also be integrated with the processor 101.
  • the random access memory of the mobile phone 100 may also be referred to as a memory or a running memory, and each application installed in the mobile phone 100 needs to occupy a certain memory to run an application related program during the running process. Therefore, when the memory is larger, the mobile phone 100 can run more applications at the same time, can run various applications more quickly, and can switch between different applications more quickly.
  • Since the memory size of the mobile phone 100 is fixed, in order to prevent background applications from occupying too much of the phone's memory, when the mobile phone 100 switches a foreground application to run in the background, part or all of the memory occupied by that application may be released, so that the memory occupied by the application after it moves to the background is reduced, thereby increasing the running memory that the mobile phone 100 can actually use and improving the running speed of each application in the terminal.
  • the touch screen 104 may specifically include a touch panel 104-1 and a display 104-2.
  • The touch panel 104-1 can collect touch events performed by the user of the mobile phone 100 on or near it (for example, operations performed by the user on or near the touch panel 104-1 with a finger, a stylus, or any other suitable object), and send the collected touch information to another device (for example, the processor 101).
  • A touch event performed by the user near the touch panel 104-1 may be referred to as a hovering touch; a hovering touch means that the user does not need to directly touch the touchpad in order to select, move, or drag a target (for example, an icon), but only needs to be located near the terminal in order to perform the desired function.
  • the touch panel 104-1 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • a display (also referred to as display) 104-2 can be used to display information entered by the user or information provided to the user as well as various menus of the mobile phone 100.
  • the display 104-2 can be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the touchpad 104-1 can be overlaid on the display 104-2, and when the touchpad 104-1 detects a touch event on or near it, it is transmitted to the processor 101 to determine the type of touch event, and then the processor 101 may provide a corresponding visual output on display 104-2 depending on the type of touch event.
  • Although the touchpad 104-1 and the display 104-2 are described above as two separate components implementing the input and output functions of the handset 100, in some embodiments the touchpad 104-1 may be integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100. It should be understood that the touch screen 104 is formed by stacking multiple layers of material; only the touch panel (layer) and the display screen (layer) are shown in the embodiment of the present application, and the other layers are not described here.
  • In addition, the touch panel 104-1 may be disposed on the front surface of the mobile phone 100 in a full-panel form, and the display screen 104-2 may also be disposed on the front surface of the mobile phone 100 in a full-panel form, so that a bezel-less structure can be achieved on the front of the mobile phone.
  • the mobile phone 100 can also have a fingerprint recognition function.
  • the fingerprint reader 112 can be configured on the back of the handset 100 (eg, below the rear camera) or on the front side of the handset 100 (eg, below the touch screen 104).
  • the fingerprint collection device 112 can be configured in the touch screen 104 to implement the fingerprint recognition function, that is, the fingerprint collection device 112 can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100.
  • the fingerprint capture device 112 is disposed in the touch screen 104 and may be part of the touch screen 104 or may be otherwise disposed in the touch screen 104.
  • the main components of the fingerprint collection device 112 in the embodiment of the present application refer to
  • the fingerprint sensor can employ any type of sensing technology including, but not limited to, optical, capacitive, piezoelectric or ultrasonic sensing technology.
  • the mobile phone 100 can also include a Bluetooth device 105 for enabling data exchange between the handset 100 and other short-range terminals (eg, mobile phones, smart watches, etc.).
  • the Bluetooth device in the embodiment of the present application may be an integrated circuit or a Bluetooth chip or the like.
  • the handset 100 can also include at least one type of sensor 106, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light, and the proximity sensor may turn off the power of the display when the mobile phone 100 moves to the ear.
  • As one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes); when the phone is stationary, it can detect the magnitude and direction of gravity, and can therefore be used for applications that identify the phone's posture (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer or tap detection), and so on.
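As a small illustration of how a stationary accelerometer reading (essentially the gravity vector) can be turned into device orientation for the portrait/landscape switching mentioned above, consider the sketch below; the axis conventions and the decision rule are assumptions for illustration.

```python
# Sketch: derive pitch/roll and a rough screen orientation from a gravity-only sample.

import math

def pitch_roll(ax, ay, az):
    """Pitch/roll in degrees from a gravity-only accelerometer sample (m/s^2)."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def screen_orientation(ax, ay, az):
    """Very rough portrait/landscape decision based on which axis carries more gravity."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

print(pitch_roll(0.0, 9.81, 0.0))          # device held upright: roll is about 90 degrees
print(screen_orientation(9.81, 0.5, 0.0))  # gravity mostly along x: landscape
```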
  • The mobile phone 100 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.
  • The Wi-Fi device 107 is configured to provide the mobile phone 100 with network access complying with Wi-Fi related standard protocols; the mobile phone 100 can access a Wi-Fi access point through the Wi-Fi device 107, helping the user send and receive e-mails, browse web pages, access streaming media, and so on, and providing the user with wireless broadband Internet access.
  • the Wi-Fi device 107 can also function as a Wi-Fi wireless access point, and can provide Wi-Fi network access to other terminals.
  • the positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 can be specifically a receiver of a positioning system such as a Global Positioning System (GPS) or a Beidou satellite navigation system, or a Russian GLONASS. After receiving the geographical location transmitted by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or sends it to the memory 103 for storage.
  • In another embodiment, the positioning device 108 may also be a receiver of an assisted global positioning system (AGPS), in which an assistance server helps the positioning device 108 complete ranging and positioning services; in this case, the assistance server communicates with the positioning device 108 (that is, the GPS receiver) of the handset 100 over a wireless communication network to provide positioning assistance.
  • In another embodiment, the positioning device 108 may also use Wi-Fi access-point based positioning technology. Since each Wi-Fi access point has a globally unique MAC address, the terminal can scan and collect the broadcast signals of the surrounding Wi-Fi access points when Wi-Fi is turned on, obtain the geographic locations of those access points, and, combined with the strength of the Wi-Fi broadcast signals, calculate the geographic location of the terminal and send it to the positioning device 108 of the terminal.
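One common way such Wi-Fi based positioning can be approximated is a signal-strength-weighted centroid over access points with known locations, sketched below; the AP database, MAC addresses, and weighting scheme are illustrative assumptions.

```python
# Sketch of Wi-Fi access-point based positioning: look up known AP locations by MAC
# address and estimate the terminal's position as an RSSI-weighted centroid.

ap_locations = {                      # MAC address -> (latitude, longitude), assumed known
    "aa:bb:cc:dd:ee:01": (39.9040, 116.4070),
    "aa:bb:cc:dd:ee:02": (39.9045, 116.4075),
    "aa:bb:cc:dd:ee:03": (39.9050, 116.4065),
}

def estimate_position(scan):
    """scan: list of (mac, rssi_dbm). Stronger (less negative) RSSI gets more weight."""
    total_weight, lat_sum, lon_sum = 0.0, 0.0, 0.0
    for mac, rssi in scan:
        if mac not in ap_locations:
            continue
        weight = 1.0 / max(-rssi, 1.0)          # e.g. -40 dBm -> 0.025, -90 dBm -> ~0.011
        lat, lon = ap_locations[mac]
        lat_sum += weight * lat
        lon_sum += weight * lon
        total_weight += weight
    return (lat_sum / total_weight, lon_sum / total_weight) if total_weight else None

print(estimate_position([("aa:bb:cc:dd:ee:01", -40),
                         ("aa:bb:cc:dd:ee:02", -70),
                         ("aa:bb:cc:dd:ee:03", -90)]))
```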
  • the audio circuit 109, the speaker 113, and the microphone 114 can provide an audio interface between the user and the handset 100.
  • On the one hand, the audio circuit 109 can transmit the electrical signal converted from the received audio data to the speaker 113, which converts it into a sound signal for output; on the other hand, the microphone 114 converts the collected sound signal into an electrical signal, which the audio circuit 109 receives and converts into audio data, and the audio data is then output to the RF circuit 102 to be sent to, for example, another handset, or output to the memory 103 for further processing.
  • the peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, a subscriber identity module card, etc.).
  • For example, a mouse is connected through a universal serial bus (USB) interface, and a subscriber identity module (SIM) card provided by a telecommunications operator is connected through metal contacts in the card slot of the subscriber identity module.
  • Peripheral interface 110 can be used to couple the external input/output peripherals described above to processor 101 and memory 103.
  • the mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components.
  • The battery may be logically connected to the processor 101 through the power management chip, so that functions such as charging, discharging, and power consumption management are handled through the power supply device 111.
  • the mobile phone 100 may further include a camera (front camera and/or rear camera), a flash, a micro projection device, a near field communication (NFC) device, and the like, and details are not described herein.
  • FIG. 15 is a schematic diagram of interaction control between a terminal and a user according to the present invention.
  • the terminal and the user perform interactive control, and the terminal collects user input, physiological characteristic parameters, etc., and processes the data to generate a decision, and provides the result of the interactive decision.
  • The terminal is mainly divided into three functional phases when performing interactive control: a data source acquisition phase for obtaining feature data, a data processing phase for analyzing the feature data, and an interactive decision phase for performing interactive control according to the processing results.
  • In the present application, the terms terminal, interactive terminal, interactive control terminal, and mobile terminal are used interchangeably, and the terminal may specifically be an intelligent terminal such as VR glasses, an in-vehicle device, or a mobile phone.
  • the interactive control method may include the following steps:
  • Step 201 Obtain at least one feature data by using at least one sensor.
  • Step 201 is a data source collection phase of the interactive control terminal, which may be implemented by multiple hardware sensors.
  • The sensors may be placed at fixed positions or worn on the user's body so as to accurately sense and collect the feature data obtained by the interactive control terminal in the present application.
  • When the terminal uses hardware sensors for data acquisition, it mainly includes the sensors shown in FIG. 3: biological sensors, physical sensors, biofeedback sensors, an image acquisition module, an audio/video input and output device, and so on, which are used in the present application.
  • the biometric sensor may include one or more of the following sensors to obtain biometric data of the corresponding user:
  • the iris sensor can be worn on the user's head or in a fixed position for collecting the iris image of the user, specifically: an iris camera;
  • the fingerprint sensor is configured to collect fingerprint data of the user, and specifically: a fingerprint collector;
  • An olfactory sensor, which discriminates the odor perceived by the user by providing the user with samples of the odor data of a plurality of smells;
  • A taste sensor, which provides samples of the taste data of a variety of objects such as food (for example sweet, bitter, spicy, salty, and so on) to discern the taste of the object perceived by the user;
  • Muscle motion sensors, which can be attached or bound to the user's muscles, such as the biceps, to sense the user's muscle movements, for example collecting the user's muscle motion data and converting it into electrical signal data characterizing the user's muscle motion state;
  • A brain wave sensor, which can be applied to the user's head to collect and output the user's brain wave data, and may specifically be a brain wave chip or the like;
  • Physiological characteristic sensors, which are used for collecting various biological indicator data of the user, and may specifically be: a blood pressure sensor (bound on the user's arm), a heart rate sensor (placed on the user's neck or finger), or a respiratory rate sensor (a sensor that converts the sensed breathing frequency of the user into an output signal).
  • the physical type sensor may include one or more of the following sensors to obtain environmental characteristic data and motion characteristic data of the user:
  • a depth sensor for acquiring image depth data
  • a temperature sensor for collecting temperature data of the surrounding environment
  • Humidity sensor for collecting humidity data of the surrounding environment
  • Sound sensor for collecting noise data in the surrounding environment
  • a light sensor for collecting light intensity data in a surrounding environment
  • An air quality sensor for collecting air quality data of the surrounding environment, such as the content of PM2.5;
  • a speed sensor, which is used to collect various speed data of a target object such as the user, and may specifically be a linear velocity sensor, an angular velocity sensor, or the like;
  • a position sensor, such as a Global Positioning System (GPS) or BeiDou Navigation Satellite System (BDS) receiver, which is used to collect position data of a target and may specifically be a linear displacement sensor, an angular displacement sensor, or the like;
  • Direction sensor for collecting direction data.
  • The biofeedback sensor is mainly used to generate simulated feedback data for the user, such as pressure feedback (the data fed back after the user perceives pressure), vibration feedback, simulated odor feedback, and the like.
  • the image acquisition module can be understood as a camera module, which can be implemented by a single camera or multiple cameras for acquiring image data.
  • Audio video input and output devices are mainly used to collect and display audio and/or video data.
  • Accordingly, the feature data obtained by the above sensors in the present application may include one or more of the following: an iris image, fingerprint data, odor data, object taste, muscle motion state, brain wave data, blood pressure, heart rate, respiratory rate, image depth data, temperature data, humidity data, speed data, position data, direction data, image data, audio and video data, and the like.
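  • Purely as a hedged illustration (the class and field names below are hypothetical and not part of the original disclosure), the feature data listed above can be pictured as one record per sampling instant, for example in Python:

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional, Tuple

    @dataclass
    class FeatureData:
        """One snapshot of multi-sensor feature data (hypothetical structure)."""
        # biological feature data
        iris_image: Optional[bytes] = None
        fingerprint: Optional[bytes] = None
        muscle_state: Optional[List[float]] = None      # electrical-signal samples
        brain_wave: Optional[List[float]] = None
        heart_rate: Optional[float] = None              # beats per minute
        # environmental / physical feature data
        temperature: Optional[float] = None             # degrees Celsius
        humidity: Optional[float] = None                # percent
        speed: Optional[float] = None                   # metres per second
        position: Optional[Tuple[float, float]] = None  # e.g. latitude, longitude
        direction: Optional[float] = None               # heading in degrees
        image: Optional[bytes] = None
        image_depth: Optional[List[float]] = None
        extras: Dict[str, object] = field(default_factory=dict)

    sample = FeatureData(heart_rate=72.0, temperature=24.5, humidity=40.0)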
  • The terminal and the sensors may be connected by a wireless connection such as WiFi or Bluetooth, or by a wired connection such as a serial data line.
  • Step 202: Analyze the feature data to obtain an output result capable of characterizing the interaction state.
  • For example, the output result characterizes the current interaction state on the basis of the terminal's analysis of the feature data, such as the data of the game interaction between the user and the terminal in the tennis example above, including the user's muscle motion state, the user's brain wave activity state, and the user's various biological indicator states such as heart rate, blood pressure, and respiration.
  • Alternatively, the output result is information that characterizes the current interaction process or conclusion on the basis of the terminal's analysis of the feature data, such as the interaction data in the driving example above, including the position, moving direction, and speed of targets in the environment, as well as the temperature and humidity status.
  • When the feature data is analyzed in this embodiment, the analysis may include data source identification and processing, data learning, and data correction stages, which mainly preprocess and correct the previously obtained feature data and provide the input basis for the interactive decision stage.
  • In the data source identification and processing stage, the present application can perform data source identification, data filtering, data normalization, and similar processing on the feature data through biometric recognition, biomedical analysis, and image processing algorithms to obtain an output result.
  • Taking image processing as an example, the output result is obtained by performing color recognition, eye recognition and tracking, motion recognition, posture recognition, expression and micro-expression recognition, and target tracking on the feature data.
  • the present application can learn and memorize the real scene through the theory of machine learning, and provide auxiliary information for subsequent data correction and other stages.
  • the present application corrects the output result obtained by the data source identification and processing stage according to the auxiliary information outputted in the data learning stage to improve the accuracy of the output result.
  • the output of the 3D scene simulation can be intelligently corrected to improve the accuracy of the subsequently established 3D scene.
  • In one implementation, after the raw feature data is collected by the various sensors, the present application may first perform data source identification and classification on the feature data, and then apply different data processing algorithms to the different types of feature data.
  • For example, the feature data may be classified by data type, and the feature data of different data types is passed to different data processing algorithms, such as biometric recognition algorithms, biomedical analysis algorithms, and image processing algorithms (color recognition, eye recognition and tracking, motion recognition, posture recognition, expression and micro-expression recognition, target tracking, etc.);
  • the auxiliary information obtained by machine learning is used to correct the output result, and the corrected result is output as an input of the subsequent interactive decision stage;
  • the data output by the data correction can also provide a further data foundation or material for the data learning phase, so that data learning can be further improved.
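  • A minimal sketch of this classify, process, and correct flow is given below; the function names and grouping rules are illustrative assumptions, not the algorithms actually claimed:

    def classify_by_source(feature_data: dict) -> dict:
        """Group raw readings by data type before choosing a processing algorithm."""
        groups = {"biological": {}, "image": {}, "environmental": {}}
        for key, value in feature_data.items():
            if key in ("muscle_state", "brain_wave", "heart_rate", "fingerprint"):
                groups["biological"][key] = value
            elif key in ("image", "image_depth"):
                groups["image"][key] = value
            else:
                groups["environmental"][key] = value
        return groups

    def process(groups: dict) -> dict:
        """Placeholder for per-type identification, filtering and normalization."""
        return {key: value for readings in groups.values() for key, value in readings.items()}

    def correct(output: dict, learned_auxiliary: dict) -> dict:
        """Correct the processed output with auxiliary information from data learning."""
        return {key: learned_auxiliary.get(key, value) for key, value in output.items()}

    raw = {"brain_wave": [0.1, 0.3], "temperature": 24.5, "image": b"..."}
    result = correct(process(classify_by_source(raw)), learned_auxiliary={})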
  • the method may further include: data storage, data modeling, and decision feedback.
  • Data storage refers to storing the various types of feature data collected by the sensors, the results obtained by data learning, various historical information, and so on, thereby providing data support for the feedback of subsequent decision mechanisms;
  • Data modeling and decision feedback refer to reconstructing and enhancing the three-dimensional scene based on the subsequent decision mechanism and the corrected data, so as to intelligently simulate the interactive elements of various scenarios, and, based on the data obtained through data learning and data storage, producing the corresponding feedback output for the decision result generated by the decision mechanism; that is, feedback information is provided to the user based on the decision result and the stored data.
  • In one implementation, when the feedback output is performed, the feedback may be output to a feedback sensor, where the feedback sensor is one of the various sensors described above.
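  • The storage and feedback loop described above might look roughly like the following sketch, which assumes a simple in-memory store purely for illustration:

    class FeedbackStore:
        """Keeps collected feature data, decision results and execution history."""

        def __init__(self):
            self.history = []

        def record(self, feature_data, decision, execution_result):
            self.history.append(
                {"data": feature_data, "decision": decision, "result": execution_result}
            )

        def feedback(self) -> str:
            """Derive feedback information for the user from stored decisions and results."""
            if not self.history:
                return "no feedback available yet"
            last = self.history[-1]
            return f"decision '{last['decision']}' produced result '{last['result']}'"

    store = FeedbackStore()
    store.record({"heart_rate": 72.0}, "enter_game_mode", "game started")
    print(store.feedback())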
  • Step 203: Determine a decision mechanism corresponding to the feature data.
  • The decision mechanisms in this application can be divided into single-point decision mechanisms and multi-point decision mechanisms.
  • The single-point decision mechanisms may include an access (admission) decision mechanism based on collected user characteristics, an environment-aware decision mechanism, and a biometric decision mechanism based on the user's biological signals, actions, and muscle responses.
  • the access decision mechanism refers to the access mechanism for identifying or verifying the identity of the user.
  • The access decision mechanism is the basis of the other single-point decision mechanisms and of the multi-point decision mechanisms; it makes the basic interactive control judgments such as user identity verification, for example by using a fingerprint recognition algorithm, an iris recognition algorithm, or a face recognition algorithm.
  • The environment-aware decision mechanism refers to a decision mechanism that makes interactive control judgments based on elements of the user's surrounding environment, for example by using physical feature extraction, biometric recognition, and image recognition processing technologies; the biometric decision mechanism refers to a decision mechanism that makes interactive control judgments based on the user's own biological indicators or features, for example by using biometric recognition technology, and so on.
  • the physical feature extraction technology may include: a parallel map matching algorithm based on image physical features and a planar flow field topology simplification algorithm based on physical features;
  • The biometric recognition technology may include a template-matching-based recognition algorithm, a coefficient-identification-based multi-modal biometric recognition algorithm, and a feature-fusion biometric recognition algorithm; the image recognition processing technology may include a wavelet-based image matching algorithm, an image local feature extraction algorithm, a binocular matching and fusion algorithm, a binocular ranging/speed measurement algorithm, and the like.
  • The multi-point decision mechanism refers to a decision mechanism formed, on the basis of the single-point decision mechanisms, by any combination of multiple single-point decision mechanisms through complex logic, such as an intelligent simulated interaction decision mechanism composed of the admission decision mechanism, the environment-aware decision mechanism, and the biometric decision mechanism, or an intelligent living decision mechanism composed of the admission decision mechanism and the biometric decision mechanism.
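  • The relationship between single-point and multi-point mechanisms can be pictured as a simple combination of predicates; the wiring below is an assumption for illustration rather than the required implementation:

    def admission_decision(output: dict) -> bool:
        # single-point: identity must be verified before anything else
        return output.get("identity_verified", False)

    def environment_decision(output: dict) -> bool:
        # single-point: proceed only when the environment is comfortable enough
        return output.get("comfort_index", 0.0) > 0.5

    def biometric_decision(output: dict) -> bool:
        # single-point: react when a strong enough motion is detected
        return output.get("motion_intensity", 0.0) > 0.2

    def multi_point_decision(output: dict, mechanisms) -> bool:
        """A multi-point mechanism is any combination of single-point mechanisms."""
        return all(mechanism(output) for mechanism in mechanisms)

    # e.g. the intelligent simulated interaction decision mechanism mentioned above
    smart_interaction = (admission_decision, environment_decision, biometric_decision)
    ok = multi_point_decision(
        {"identity_verified": True, "comfort_index": 0.8, "motion_intensity": 0.4},
        smart_interaction,
    )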
  • Step 204: Determine an action instruction corresponding to the output result according to the determined decision mechanism. Specifically, this may include the following scenarios.
  • First, controlling the terminal to enter a target working mode based on at least a part of the environmental feature data, optionally together with at least a part of the biometric data. The target working mode is a specific scenario that the terminal enters after executing an instruction, and the corresponding function is executed in that scenario, for example a game mode (playing games in this scenario), a navigation mode (navigating in this scenario), a driving mode (the user drives in this scenario and the terminal intelligently prompts related information), a VR game mode (playing games in a VR environment), a friend connection mode (interacting with friends on a social application), and so on.
  • Second, controlling the terminal to display a current motion picture, so that the motion picture currently displayed by the terminal can be adjusted in real time according to the user's operation behavior and the environment in which the user is located. For example, when the user is playing a game and it suddenly starts raining outside, the motion picture currently displayed by the terminal can be adjusted to a rainy-day scene, so that the user can be better immersed in the game.
  • Third, controlling the terminal to prompt road driving information, so that the user can learn the driving conditions, the weather, the user's own current state, and so on in time; see the example in the driving scenario above.
  • Fourth, obtaining environmental state information at least according to the temperature data, humidity data, image data, image depth data, direction data, and position data in the feature data, where the environmental state information includes object element information and comfort information of the environment; and controlling the terminal to prompt environment-related information according to the object element information and the comfort information.
  • Fifth, obtaining biological state information at least according to the muscle motion state data, the brain wave data, and the facial image data in the feature data, where the biological state information includes at least biological motion state information and biological emotional state information; and controlling the terminal to display biometric-related information according to the biological motion state information and the biological emotional state information.
  • Step 205: Execute the action instruction.
  • For example, in a sports scenario, after feature data such as the user's muscle state data and image data during a stroke is collected, the output result and the corresponding decision mechanism are obtained according to the feature data, such as a multi-point decision mechanism combining the biometric decision mechanism and the admission decision mechanism.
  • The corresponding action instruction is then determined according to the output result of the feature data, such as the user's hitting strength, the ball speed, and the direction, and the action instruction can be used to control the ball to move along the computed trajectory, thereby implementing intelligent interaction.
  • After the terminal executes the action instruction, the execution result is obtained.
  • For example, after the ball is controlled to move along the trajectory, the state information of the ball landing or the sound feedback after the user hits the ball is obtained; based on this execution result and the already stored data, such as learned data and historical data, feedback information is provided to the user, for example letting the user know which data needs to be corrected and whether subsequent behavior can be adjusted according to the user's preferences.
  • In addition to performing the foregoing operations, the terminal may also prompt the user with living information (such as diet and rest schedule); prompt the user with work information (such as commuting times and places); prompt the user with friend online information (for example, a girlfriend's online and offline status); prompt the user with data connection information (for example, switching from the cellular network to a Wi-Fi network); prompt the user with game progress information (for example, which set the current tennis match is in, or which level of a game the user has reached); prompt the user with road condition information (for example, whether there is a slope or construction ahead); prompt the user with weather information; prompt the user to rest (for example, when the user has been playing for too long); and display a three-dimensional virtual scene generated according to the environmental feature data (for example, building a virtual 3D scene to give the user a better gaming experience).
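  • As one hedged illustration of step 204 (the keys, thresholds, and mode names below are invented for the example), an action instruction could be selected from the output result as follows:

    def choose_action(output: dict) -> str:
        """Map an analysed output result to an action instruction."""
        if output.get("driving"):
            return "prompt_road_driving_info"
        if output.get("in_game") and output.get("raining"):
            return "switch_motion_picture_to_rainy_scene"
        if output.get("play_duration_min", 0) > 120:
            return "prompt_user_to_rest"
        if output.get("game_requested"):
            return "enter_game_mode"
        return "no_action"

    # Example: the user has been playing for three hours.
    print(choose_action({"in_game": True, "play_duration_min": 180}))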
  • FIG. 4 shows the logical structure of the admission decision mechanism.
  • The admission decision mechanism includes three methods: iris recognition, fingerprint recognition, and, on the basis of these two, assisted face recognition; these three methods and their mutual combinations constitute the admission decision mechanism of the interactive control system in the present application. It should be understood that the admission decision mechanism is similar to the user identity authentication referred to herein, and the same or corresponding technical features may be cited for each other.
  • That is, the present application collects feature data such as iris images, fingerprint data, and facial image data by using hardware sensors such as an iris sensor, a fingerprint sensor, and a single camera and/or multiple cameras; identifies, preprocesses, learns from, and corrects the feature data through the corresponding recognition technologies, such as iris recognition, fingerprint recognition, and face recognition, to obtain the output result; and then evaluates the output result based on the admission decision mechanism, thereby generating and executing the action instruction and implementing intelligent interaction.
  • For example, the present application collects one or more of the user's iris image, fingerprint data, and face image data, and obtains an output result capable of characterizing the user's identity after iris recognition, fingerprint recognition, and face recognition; after the decision mechanism for the identity or payment function is determined, the action instruction for login or payment is determined according to the output result and the decision mechanism, and actions such as identity authentication or bill payment are then performed.
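  • A minimal sketch of that admission flow, assuming boolean results from pre-existing iris, fingerprint, and face matchers (the function is illustrative, not the claimed logic):

    def admit_user(iris_ok: bool, fingerprint_ok: bool, face_ok: bool) -> str:
        """Iris and/or fingerprint admit directly; face recognition only assists."""
        if iris_ok or fingerprint_ok:
            return "open user space"          # e.g. open the court, or the payment page
        if face_ok:
            return "open user space (face-assisted)"
        return "create new user space"        # register a new account first

    print(admit_user(iris_ok=False, fingerprint_ok=False, face_ok=True))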
  • Figure 5 shows the logical structure of the environment-aware decision-making mechanism.
  • The environment-aware decision mechanism analyzes the characteristics of each element in the user's surrounding environment through physical, biological, and image processing techniques, and builds a perception decision mechanism for the complex environment formed by these elements and their mutual combinations.
  • In the environment-aware decision mechanism, the following basic aspects of the environment are each determined from individual feature data:
  • the basic elements in the environment, such as the size, contour, position, and direction of objects, are determined from image data, image depth data, direction data, and position data;
  • the motion state of an object in the environment is determined from image data, image depth data, and speed data;
  • the environmental comfort index is determined from temperature data, humidity data, odor data, and object taste data;
  • after the above basic elements, their motion states, and the environmental comfort index are combined, a relatively advanced decision basis is formed, for example one involving prediction or emotional judgment; that is, the above three aspects are combined to perform the interactive control determination. For example, a judgment can be made based on the size, contour, position, and direction of the basic objects in the environment, assisted by the odor data and the object taste data (if, among the basic environmental elements, there are many tables, chairs, and dishes, the place can be judged to be a restaurant, and with the assistance of the odor it can further be judged to be a delicious restaurant).
  • That is, the present application collects feature data such as odor data, object taste, image depth data, temperature data, humidity data, speed data, position data, direction data, and audio and video data by using hardware sensors such as an olfactory sensor, a taste sensor, a depth sensor, a temperature sensor, a humidity sensor, a speed sensor, a position sensor, a direction sensor, an audio input device, and a single camera or multiple cameras.
  • The feature data of the basic elements of the user's surrounding environment is then analyzed through physical, biological, and image processing techniques.
  • The output result is obtained by identifying, preprocessing, learning from, and correcting the feature data, and the output result is then evaluated based on the environment-aware decision mechanism, thereby generating and executing the action instruction and implementing intelligent interaction.
  • For example, the present application collects the odor, object taste, image depth data, temperature data, humidity data, speed data, position data, direction data, and audio and video data of the various objects in a restaurant environment, such as the dining tables and the dishes; analyzes, through physical, biological, and image processing techniques, the feature data of the tables, chairs, waiters, dishes, and chefs in the restaurant environment; obtains the output result by identifying, preprocessing, learning from, and correcting the feature data; performs three-dimensional simulated reconstruction and decision mechanism evaluation on the output result to reach the judgment that this is a very delicious restaurant; and thereby generates an instruction to display the restaurant menu, providing the user with an ordering interaction service.
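  • A toy version of that joint judgment is sketched below; the element names and thresholds are made up purely for illustration:

    def classify_environment(elements: list, comfort_index: float, smells_good: bool) -> str:
        """Combine basic elements, comfort and odor cues into a scene judgment."""
        tables = sum(1 for e in elements if e == "table")
        dishes = sum(1 for e in elements if e == "dish")
        if tables >= 3 and dishes >= 3:
            if smells_good and comfort_index > 0.6:
                return "delicious restaurant"
            return "restaurant"
        return "unknown scene"

    print(classify_environment(["table"] * 5 + ["dish"] * 8, comfort_index=0.8, smells_good=True))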
  • FIG. 6 shows the logical structure of the biometric decision mechanism.
  • The biometric decision mechanism analyzes the behavior patterns of living beings and objects through biological and image processing techniques, abstracts the various characteristic elements of living beings and objects, and then forms a complex biological/object feature decision mechanism from these feature elements and their mutual combinations.
  • the basic elements of biological/object characteristics in the biometric decision-making mechanism are individually identified by muscle motion state data, brain wave data, image data, and image depth data, for example:
  • the motion ability of a living being is judged from the data, among the above data, related to action intensity, reaction speed, motion, and posture, and is the basis of intelligent simulation of motion scenes; for example, the action intensity of the living being is determined based on the muscle motion state data, the reaction speed is determined based on the muscle motion state data and the brain wave data, and the motion and posture are determined based on the image data and the image depth data;
  • the color preference of a living being is judged from the data, among the above data, related to emotions, colors, and expressions, and is the basis for intelligently simulating food and shopping scenes; for example, the mood and the color preference are determined based on the brain wave data, and the expression is determined based on the image data;
  • the stress-response speed of a living being is judged from the data, among the above data, related to subtle movements, action intensity, emotions, eyeball trajectories, postures, and expressions, and is the basis for intelligently simulating sports, game, and entertainment scenes;
  • the motion coefficient of the object is judged by the data related to the object trajectory and the object velocity in the above data, and is the basis of the intelligent simulation scene;
  • Biological habits are judged by data related to actions, gestures, postures, expressions, subtle movements, and the like in the above data.
  • That is, the present application collects feature data such as muscle motion state data, brain wave data, image data, and image depth data by using hardware sensors such as a muscle sensing sensor, a brain wave sensor, a single camera, a depth sensor, and multiple cameras.
  • Biological and image processing techniques such as subtle motion determination, action intensity determination, emotion determination, reaction speed determination, color determination, eyeball position determination, object trajectory tracking, eyeball trajectory tracking, motion determination, gesture determination, posture determination, and expression determination are then used to analyze the feature data of the various characteristic elements of the living being or object. The output result is obtained by identifying, preprocessing, learning from, and correcting the feature data, and is then evaluated based on the biometric decision mechanism to perform intelligent simulation of the scene, for example simulating and reconstructing the user's batting scene, thereby generating and executing an action instruction, such as an instruction to move the ball in a given direction at a given speed; the instruction is used to display the motion of the ball after the user hits it, implementing intelligent interaction.
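  • For instance, the batting reconstruction could reduce to estimating an initial velocity from the hitting strength and direction and integrating a simple flight path; the constants and the drag-free model below are assumptions for illustration, not values from the disclosure:

    import math

    def initial_velocity(hit_strength: float, elevation_deg: float, azimuth_deg: float):
        """Map hitting strength (arbitrary units) and hit direction to a velocity vector."""
        speed = 2.5 * hit_strength                 # assumed strength-to-speed factor
        el, az = math.radians(elevation_deg), math.radians(azimuth_deg)
        return (speed * math.cos(el) * math.cos(az),
                speed * math.cos(el) * math.sin(az),
                speed * math.sin(el))

    def trajectory(v0, steps=200, dt=0.02, g=9.81):
        """Integrate a drag-free flight path until the ball reaches the ground."""
        x = y = z = 0.0
        vx, vy, vz = v0
        points = []
        for _ in range(steps):
            x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
            vz -= g * dt
            if z < 0.0:                            # the ball has landed
                break
            points.append((x, y, z))
        return points

    path = trajectory(initial_velocity(hit_strength=8.0, elevation_deg=15.0, azimuth_deg=0.0))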
  • A multi-point decision mechanism formed by combining two or more single-point decision mechanisms is built on the basis of the single-point decision mechanisms, such as the admission decision mechanism, the environment-aware decision mechanism, and the biometric decision mechanism, and uses complex logic to implement any combination of multiple single-point decision mechanisms. The following are examples of the application of the multi-point decision mechanism:
  • First, on the basis of the single-point decision mechanisms, this application establishes a multi-point decision mechanism to form an intelligent simulated interaction decision system.
  • Intelligent simulated interaction is a key development direction of the next generation of smart devices, and involves most scenes of people's daily life, such as sports, entertainment, video, games, shopping, and food.
  • In this embodiment, the decision mechanisms involved in the intelligent simulated interaction decision system may include the admission decision mechanism and the environment-aware decision mechanism.
  • As shown in FIG. 7, the intelligent simulated interaction system may be provided with a single camera, multiple cameras, a depth sensor, a temperature sensor, a humidity sensor, a speed sensor, a position sensor, a direction sensor, an iris/fingerprint sensor, and the like, and then use image processing technology, basic physical data processing, iris/fingerprint recognition technology, and face recognition technology to realize functions such as three-dimensional virtual scene reconstruction, intelligent admission, virtual interaction, and intelligent feedback, as shown in FIG. 8.
  • Virtual scene reconstruction refers to establishing a high-performance, high-quality 3D virtual scene through complex determinations; intelligent admission means that the user enters the virtual scene after being authenticated by the identity admission system; virtual interaction means that a powerful biological/object feature decision mechanism ensures the accuracy and efficiency of the user's interaction with the environment; intelligent feedback means that the auxiliary decision system produces intelligent feedback and immerses the user in the virtual interactive world.
  • Virtual scene reconstruction may include motion scene reconstruction, game scene reconstruction, video scene reconstruction, entertainment scene reconstruction, shopping scene reconstruction, food scene reconstruction, and the like. The determination for virtual reconstruction of the three-dimensional scene is made based on the output result of the feature data analysis, and includes:
  • identification of the elements in the scene, such as the type, size, orientation, texture or material, and state of the objects in the scene, for example by a joint determination based on all or part of physical data such as the image data, image depth data, position data, direction data, and speed data in the feature data;
  • acquisition of the scene environment parameters, for example by a joint determination based on all or part of data such as the temperature data and humidity data.
  • As shown in FIG. 9, in motion scene reconstruction, the whole real motion scene can be reconstructed by a joint determination based on all or part of physical data such as image data, muscle motion state data, image depth data, direction data, position data, speed data, temperature data, and humidity data, and the corresponding action instructions are generated and executed to implement interactive control;
  • a game scene (involving game interaction motion, such as a batting scene in virtual reality glasses) can be jointly determined based on all or part of physical data such as image data, muscle motion state data, brain wave data, image depth data, direction data, position data, speed data, temperature data, and humidity data;
  • an entertainment interaction scene within a real entertainment scene may be jointly determined based on all or part of physical data such as image data, muscle motion state data, brain wave data, image depth data, direction data, position data, speed data, temperature data, and humidity data;
  • a virtual scene of an object display model in a shopping scene may be jointly determined based on all or part of physical data such as image data, audio and video data, image depth data, direction data, position data, and speed data;
  • an item usage interaction scene in a shopping scene may be jointly determined based on all or part of physical data such as image data, audio and video data, brain wave data, odor data, object taste data, image depth data, direction data, position data, and speed data;
  • a food scene may be jointly determined based on all or part of physical data such as image data, image depth data, muscle state data, brain wave data, odor data, and object taste data.
  • The result of the above intelligent interactive control may trigger a biofeedback sensor or the like to perform simulated reconstruction of an intelligent feedback scene; specifically, a joint determination may be made by combining all or part of components such as the biological sensor, the vibration sensor, the audio and video input and output device, and the data processor, thereby reconstructing the intelligent feedback scene and providing feedback information for the corresponding virtual scene.
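  • A compact sketch of this "all or part of the available data jointly determines the scene" idea, followed by a feedback trigger; the field names and the returned strings are invented for illustration:

    def reconstruct_scene(data: dict) -> dict:
        """Build scene parameters from whatever physical data happens to be present."""
        scene = {}
        if "image" in data and "image_depth" in data:
            scene["geometry"] = "estimated from image + depth"
        if "temperature" in data or "humidity" in data:
            scene["climate"] = (data.get("temperature"), data.get("humidity"))
        if "muscle_state" in data:
            scene["player_motion"] = "reconstructed from muscle state"
        return scene

    def trigger_feedback(scene: dict, feedback_devices: list) -> list:
        """Send feedback (pressure, vibration, audio...) to whichever devices exist."""
        if not scene:
            return []
        return [f"feedback sent to {device}" for device in feedback_devices]

    scene = reconstruct_scene({"image": b"...", "image_depth": [1.2, 0.8], "temperature": 22.0})
    print(trigger_feedback(scene, ["vibration sensor", "audio output"]))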
  • Second, the present application forms a smart living system through the multi-point decision system to provide the user with optimal arrangements and suggestions for work and life; the smart living system gives the next generation of smart devices a certain degree of intelligence, and involves scenarios in every aspect of work and daily life.
  • the decision mechanism involved in the smart living system may include: an admission decision mechanism, a biometric decision mechanism, and an environment perception decision mechanism.
  • As shown in FIG. 10, the smart living system may include several parts: intelligent admission, person state recognition, environment state recognition, trip planning and learning memory, and intelligent arrangement/suggestion.
  • intelligent access means that after the user authenticates through the identity admission system, the intelligent living system is activated;
  • the state recognition of the person refers to: identifying the state of the person, including the state of both physical and mental aspects;
  • Environment state recognition refers to identification of natural conditions such as wind, rain, thunder, and lightning, of the comfort of the surrounding environment, and of noise pollution and light pollution in the area; intelligent arrangement/suggestion means that the auxiliary decision system makes optimal arrangements and suggestions.
  • As shown in FIG. 11, the determination of a person's mental state is made jointly from image data and brain wave data;
  • the determination of a person's physiological state is made jointly from physiological feature data, such as the person's magnetic field, various blood indexes, heart rate, and body temperature, together with learning and memory data;
  • the environment scene recognition determination is made jointly from image data, temperature data, humidity data, speed data, position data, and direction data, assisted by learning and memory data;
  • the intelligent suggestion determination is made jointly from the above three types of determination results together with auxiliary modules such as the trip plan and the learning memory, whereby the corresponding action instruction is generated and executed to implement interactive control.
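  • The smart-living combination might be sketched as below; the state labels and rules are invented, the point being only that the three recognition results plus the trip plan feed a single suggestion:

    def smart_suggestion(mental: str, physiological: str, environment: str, trip: dict) -> str:
        """Combine the three state recognitions with the trip plan into one piece of advice."""
        if physiological == "fatigued":
            return "postpone departure and rest before driving"
        if environment == "bad weather" and trip.get("mode") == "drive":
            return "leave earlier and drive slowly"
        if mental == "overexcited":
            return "drive cautiously and play some light music"
        return "keep the planned schedule"

    print(smart_suggestion("calm", "healthy", "bad weather", {"mode": "drive"}))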
  • FIG. 12 is a schematic structural diagram of a terminal; the terminal may include the following structure: a memory 1201, configured to store an application and the data generated when the application runs.
  • The processor 1202 is configured to run the application to implement the following functions: acquiring feature data by using at least one sensor connected to the interactive control terminal, generating an action instruction according to the feature data and the decision mechanism of the control terminal, and executing the action instruction.
  • The terminal may be a terminal device, such as a mobile phone or VR glasses, whose processor has data processing and control functions.
  • FIG. 13 is a schematic structural diagram of a control system including the control terminal shown in FIG. 12. The system may include the following structure: at least one sensor 1301, configured to collect at least one piece of feature data; and a control terminal 1302, connected to the sensor and configured to acquire the feature data through the at least one sensor 1301, generate an action instruction according to the feature data and the decision mechanism of the control terminal 1302, and then execute the action instruction.
  • It should be noted that the sensor here, like the control terminal, is an independent entity; the two are therefore illustrated as two or more devices and grouped into a system.
  • In an actual product, the sensor may also be part of the control terminal, that is, the terminal has the schematic structure shown in FIG. 1: at least one sensor configured to acquire feature data, and a processor configured to generate an action instruction according to the feature data and the decision mechanism of the terminal and to execute the action instruction.
  • For the specific implementation of the functions of the control terminal in FIG. 13, reference may be made to FIG. 2 to FIG. 11 and the corresponding content above; the same or corresponding technical features may be cited for each other, and details are not described herein again.
  • In addition, an embodiment of the present invention further provides an apparatus, including: an acquiring unit, configured to acquire feature data by using at least one sensor, where the feature data is data collected by the terminal by using the at least one sensor; a generating unit, configured to generate an action instruction according to the feature data and the decision mechanism of the terminal; and an execution unit, configured to execute the action instruction.
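  • One way to read that unit split is as three small collaborating parts; the sketch below assumes callables for the sensors and the decision mechanism and is not the claimed structure itself:

    class Apparatus:
        """Acquiring unit + generating unit + execution unit, in one illustrative object."""

        def __init__(self, sensors, decision_mechanism, executor):
            self.sensors = sensors                        # acquiring unit: name -> read()
            self.decision_mechanism = decision_mechanism  # generating unit logic
            self.executor = executor                      # execution unit callback

        def acquire(self) -> dict:
            return {name: read() for name, read in self.sensors.items()}

        def generate(self, feature_data: dict) -> str:
            return self.decision_mechanism(feature_data)

        def run_once(self):
            self.executor(self.generate(self.acquire()))

    apparatus = Apparatus(
        sensors={"temperature": lambda: 23.0},
        decision_mechanism=lambda d: "prompt_weather_info" if d["temperature"] > 30 else "no_action",
        executor=print,
    )
    apparatus.run_once()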
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • The computer instructions can be stored in a computer-readable storage medium or transferred from one computer-readable storage medium to another; for example, the computer instructions can be transferred from a website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave).
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that includes one or more available media.
  • the usable medium may be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium (such as a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Hardware Design (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Optics & Photonics (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a control method, an apparatus, a terminal, a computer storage medium, and a computer program product. In the method, feature data is acquired through at least one sensor, the feature data being data collected by the terminal through the at least one sensor; an action instruction is generated according to the feature data and a decision mechanism of the terminal; and the action instruction is executed. In the present application, multifaceted feature data is collected by multiple sensors and then analyzed, and the corresponding action instruction is generated based on the corresponding decision mechanism to implement interactive control. Compared with the prior art, in which the accuracy of interactive control is poor because of data source limitations, the present application remedies this defect by increasing the data sources and making the interactive control decisions from multiple aspects, thereby significantly improving the accuracy of interactive control.

Description

一种控制方法、终端及***
本申请要求于2017年3月21日提交中国专利局、申请号为201710171128.8、发明名称为“一种基于用户的生物特征与动作识别的方法和设备”的中国专利申请的优先权，其全部内容通过引用结合在本申请中。
技术领域
本申请涉及数据处理领域,尤其涉及一种控制方法、终端及***。
背景技术
随着虚拟现实(Virtual Reality,VR)行业的兴起,人与虚拟场景的交互***的应用呈流行趋势,而随着智能设备的普及和人工智能技术的发展,更加深入更加全面的交互***必然是未来智能设备的发展方向。
而由于现有传感器技术的局限,使得交互***中数据源存在很大的限制,造成虚拟场景中交互***的真实性较差。例如,在VR眼镜中仅采用摄像头采集眼部的图像来实现视线追踪及智能交互,使得对视线的估计比较局限,造成VR眼镜的体验功能较差。
发明内容
有鉴于此,本申请提供了一种控制方法、终端及***,目的在于解决由于数据源存在限制造成交互***的准确性较差的技术问题。
本申请的第一方面提供了一种控制方法,适用于终端,其中包括以下步骤:通过至少一个传感器获取特征数据,根据所述特征数据以及所述终端的决策机制,生成动作指令,再执行所述动作指令,实现交互控制。可见,本申请通过多个传感器采集多方面的特征数据之后进行数据分析,再基于相应的决策机制来生成相应的动作指令,实现交互控制,相对于现有技术中由于数据源限制造成交互控制准确性较差的情况,本申请通过增加数据源来改善这一缺陷,从多方面对交互控制进行决策判定,明显提高交互控制的准确性。
第一方面第一种实现,所述特征数据至少包括用户的生物特征数据及环境特征数据;所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:根据所述生物特征数据中的至少一种,如脑电波数据、生物指标数据和肌肉运动数据,及环境特征数据中的至少一种,如温度数据、湿度数据、噪音数据、光线强度数据和空气质量数据,控制所述进入目标工作模式。或者,所述特征数据仅包括环境特征数据;所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:基于环境特征数据的至少一部分,控制所述终端进入目标工作模式。
第一方面第二种实现,所述特征数据至少包括用户的所述生物特征数据及所述环境特征数据;所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:至少根据所述生物特征数据中的脑电波数据和肌肉运动数据及所述环境特征数据中的温度数据、湿度数据、图像数据和声音数据,控制所述终端显示用户的当前运动画面。或者,所述特征数据仅包括环境特征数据;所述根据所述特征数据以及所述终端的决 策机制,生成动作指令,包括:基于环境特征数据的至少一部分,控制所述终端显示当前运动画面。
第一方面第三种实现,所述特征数据至少包括用户的所述生物特征数据及所述环境特征数据;所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:至少根据所述生物特征数据中用户的生物指标数据和脑电波数据,以及所述环境特征数据中的路面的图像数据、车辆的速度数据、温度数据、位置数据和湿度数据,控制所述终端提示路面驾驶信息。或者,所述特征数据仅包括环境特征数据;所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:基于环境特征数据的至少一部分,控制所述终端提示路面驾驶信息。
第一方面第四种实现,所述根据所述特征数据以及所述决策机制,生成动作指令,包括:至少根据所述特征数据中的温度数据、湿度数据、图像数据、图像深度数据、方向数据及位置数据,获得环境状态信息,所述环境状态信息包括:环境中物体要素信息及舒适度信息;根据所述环境中物体要素信息及舒适度信息,控制所述终端提示环境相关信息。
第一方面第五种实现,所述根据所述特征数据以及所述决策机制,生成动作指令,包括:至少根据所述特征数据中的肌肉运动状态数据、脑电波数据、脸部的图像数据,获得生物状态信息,所述生物状态信息至少包括:生物运动状态信息、生物情感状态信息;根据所述生物运动状态信息及生物情感状态信息,控制所述终端显示生物相关信息。
具体地,在第一方面的所有实现中,都可以传感器获取到的生物特征信息,对用户进行身份识别,例如基于指纹数据、虹膜数据或脸部数据中的一种或任意组合,对用户进行身份识别。在身份识别通过后,至少可以有两种操作。其一,在身份认证通过后,才执行上述生成指令的动作;其二,是在身份认证通过后,执行上述执行指令的动作。两者均可以,本方案不作限制。所述动作指令至少包括:控制所述终端发出语音、控制所述终端执行显示及控制所述终端触发某一应用的功能中的一种或多种。
具体地,在一种可能的实现种,根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:分析所述特征数据,得到输出结果;确定所述特征数据对应的决策机制;根据所述决策机制,确定所述输出结果对应的动作指令。所述分析所述特征数据,得到输出结果,包括:对所述特征数据进行数据源识别并分类;对分类后的特征数据采用相应的数据处理算法进行处理,得到输出结果。具体而言,可以是对所述生物特征数据采用生物识别算法进行要素识别,得到输出结果,所述输出结果至少包括:指纹识别结果、虹膜识别结果、脸部识别结果、生物运动状态识别结果中的一种或任意组合;对所述环境特征数据采用物理基础数据处理算法进行要素识别,得到输出结果,所述输出结果至少包括:环境内物体类型、尺寸、方位、材料、状态、环境温度及环境湿度识别结果中的一种或任意组合。
在一种可能的实现种,本方案还可以对所述输出结果的数据进行数据学习及数据修正。进一步地,将所述特征数据及经过学习和修正的输出结果进行存储;在执行所述动作指令之后,还包括基于执行所述动作指令之后所产生的执行结果与存储的数据, 生成反馈信息,便于提高下次输出结果的准确性。
第二方面,本申请的实施例提供一种终端,包括:处理器、存储器;该存储器用于存储计算机执行指令,该处理器与该存储器通过该总线连接,当终端运行时,该处理器执行该存储器存储的该计算机执行指令,以使终端执行上述任一项应用切换方法。
第三方面,本申请的实施例提供一种终端,包括:至少一个传感器,用于获取特征数据;处理器,用于根据所述特征数据以及所述终端的决策机制,生成动作指令;执行所述动作指令。
第四方面,本申请的实施例提供一种装置,包括:获取单元,用于通过至少一个传感器获取特征数据,所述特征数据为所述终端通过所述至少一个传感器采集到的数据;生成单元,用于根据所述特征数据以及所述终端的决策机制,生成动作指令;执行单元,用于执行所述动作指令。
第五方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当该指令在上述任一项终端上运行时,使得终端执行上述任一项应用切换方法。
第六方面,本申请实施例提供一种包含指令的计算机程序产品,当其在上述任一项终端上运行时,使得终端执行上述任一项应用切换方法。
第七方面,本申请实施例提供了一种控制***,其中包括以下结构:采集至少一个特征数据的传感器及控制终端,该控制终端通过所述至少一个传感器获取特征数据,根据所述特征数据以及所述控制终端的决策机制,生成动作指令,再执行所述动作指令,实现交互控制。可见,本申请通过设置多个传感器来采集多方面的特征数据之后进行数据分析,再基于相应的决策机制来生成相应的动作指令,实现交互控制,相对于现有技术中由于数据源限制造成交互控制准确性较差的情况,本申请通过增加数据源来改善这一缺陷,从多方面对交互控制进行决策判定,明显提高交互控制的准确性。
本申请的实施例中,上述终端的名字对设备本身不构成限定,在实际实现中,这些设备可以以其他名称出现。只要各个设备的功能和本申请的实施例类似,即属于本申请权利要求及其等同技术的范围之内。
另外,第二方面至第七方面中任一种设计方式所带来的技术效果可参见上述第一方面中不同设计方法所带来的技术效果,此处不再赘述。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例中终端的架构图;
图2为本申请实施例提供的一种控制方法的流程图;
图3为本申请实施例的应用示例图;
图4为本申请实施例的应用示例图;
图5为本申请实施例的应用示例图;
图6为本申请实施例的应用示例图;
图7为本申请实施例的应用示例图;
图8为本申请实施例的应用示例图;
图9为本申请实施例的应用示例图;
图10为本申请实施例的应用示例图;
图11为本申请实施例的应用示例图;
图12为本申请实施例提供的一种控制终端的结构示意图;
图13为本申请实施例提供的一种控制***的结构示意图;
图14为本申请实施例的另一应用示例图;
图15为本申请实施例的应用场景示意图。
具体实施方式
本申请主要针对下一代智能终端,基于多传感器,结合生物医学、手势识别、图像技术等数据处理原理,将运动机器学习的方法作为辅助,进行复杂的单点多点决策的交互***。该***主要用于增强浸入式虚拟互动的体验,解决虚拟场景互动和反馈的真实性,同时增强现实中交互策略,能够给出更优的安排和建议,为用户改善工作和生活提供更好的辅助。在具体介绍本申请的方案之前,为便于理解,先示例两个适用本方案的场景。
其一,本方案在运动场景中的适用,以模拟网球运动为例。目前的VR运动型游戏中,多为简单的人机互动,而本实施例中注重于运动的真实体验性和运动效果。
在本实施例中的模拟网球运动***通过交互控制终端实现,如VR游戏终端等。交互控制终端包括相应的模块或部件,以使得所述交互控制终端执行如图14中所示的步骤:用户身份建立和智能准入、建立真实球场环境模型、建立真实击球模型、模拟浸入式运动、智能提醒和建议。具体如下:
1、用户身份建立和智能准入:交互控制终端设有内置存储器或外置存储器,预先存储有用户的身份认证信息。交互控制终端指纹传感器采集指纹数据(该指纹传感器可以是与交换控制终端通过WiFi或蓝牙连接的独立个体),或通过虹膜传感器采集虹膜数据,并将这些数据送入终端中的处理器。处理器基于这些数据比对已有数据库结果,若在数据库中存在与指纹数据或者虹膜数据相一致的数据,则进入相应的用户控件,如生成并执行开启球场界面的动作指令(相应的,如果在支付场景中,若存在相一致的数据,则生成并执行开启支付页面的动作指令,进入支付页面,或者生成并执行扣款的动作指令,完成支付功能等)。若不存在,则结合通过单摄像头或多摄像头采集到的人脸的图像数据对人脸进行识别,从而辅助识别用户身份。若依靠辅助识别的方法仍然不存在相应的用户,则生成并执行新建新用户空间的动作指令,执行新建用户账号的操作,在用户登录该账号后实现相应功能。
2、用户在进行身份登录之后,交互控制终端建立真实球场环境模型。例如,让用户来到自己喜爱的球场,通过聊天软件或在线约上自己的老友打一场真实的网球。
交互控制终端通过各种传感器采集图像数据、图像深度数据、位置数据、方向数据等,并由处理器经过分析,从而对真实的对球场地面进行图像识别,通过对比球场地面质地数据库中预存的各类球场质地数据,判定真实球场的球场质地,确定球场地面弹性系数,并基于此来建立网球运动模型。本实施例中的交互控制终端通过对整个球场进行图像识别,并结合图像深度数据,判定球场大小、边线位置、球网高度和位置、场馆布置物和位置等,再综合上述判定内容,建立真实球场的三维虚拟模型。
交互控制终端在建立真实击球模型过程中,包含以下过程:
1)采集温度数据、湿度数据、风速数据等物理数据,判定网球场馆所处的物理环境。
2)反复采集击球瞬间的肌肉状态数据如肌肉压力数据、击球的方向数据等,结合1)中所采集到的物理数据,根据机器学习的方法建立网球初始速度矢量判定模型,并在不断数据增大的过程中进行数据修正。
3)采集球运动图像数据、图像深度数据、速度数据,经过数据处理输出网球运动轨迹,结合2)的网球初速度和1)的物理数据及场地质地弹性系数,根据机器学习的方法联合判定网球运动模型,在反复数据采集下不断的修正模型。
3、模拟浸入式运动:
1)采集图像数据(如用户进行开球的图像数据)、肌肉状态数据,判定使用者体态(如击球方向)和力度。
2)将1)的结果输入到网球运动模型中,判定出网球初速度。
3)初速度结合网球运动模型,判定网球运动轨迹。
4)根据脑电波数据判定使用者的兴奋度,采集使用者眼球图像数据、模拟击球图像数据和肌肉状态数据,绘制眼球跟踪轨迹和模拟击球轨迹,判定使用者接球时的击球点及力度。
5)根据1)和4)做击球时的声音反馈,并全程进行视频模拟反馈。如根据网球速度,判定肌肉压力反馈强度。
4、智能提醒和建议:
1)结合物理数据和网球运动模型中的网球球速,对使用者进行运动建议和战术建议,如结合温度数据和湿度数据对球速的影响,球速是过快、一般,还是过慢。
2)风速情况,为使用者建议战术,如风速过大不宜发力等。
3)物理环境及使用者兴奋度,提醒使用者适合的运动时间和强度。
由此,该方案有效提升了用户在进行游戏时的交互性,使得用户能够更好的沉浸在游戏中,提升了游戏的体验。
其一,本方案在驾驶场景中的适用,例如本申请应用在智能行程中的示例进行说明,以开车去参加会议的行程模拟为例:
在本实施例中的行程模拟通过交互控制终端实现,如智能车载设备等,该交互控制终端可以包括:用户身份建立和智能准入、使用者的生理状态判定、使用者的精神状态判定、智能环境判定和智能安排、智能提醒和建议等几个功能组成:
1、用户身份建立和智能准入:交互控制终端通过传感器采集指纹数据或虹膜数据,并将数据送入终端中的处理器,在已有数据库结果中进行数据对比,若在书库中存在与这些数据相一致的数据,则进入相应的用户控件,如生产并执行开启驾驶模拟页面的指令;若不存在,处理器结合人脸的图像数据辅助识别,若仍然不存在相应的用户,则生成并执行新建新用户空间的动作指令,实现相应功能。
2、使用者在身份登录之后,确定一个开车去参加一个会议的行程:
1)交互控制终端通过传感器采集生理体征的相关特征数据(包括使用者磁场、血液各种指标、心率、体温等),送入终端的处理器,处理器基于相应的生物医院分析算法对数据进行处理,再结合机器学习使用者的身体历史记录,从而判定使用者生理状态(包括健康状态、疲劳度等等),并判定对用户行程的影响及相应的建议(如疲劳度非常高,不适宜驾驶)。
2)交互控制终端通过采集使用者脸部的图像数据及脑电波数据,由处理器对图像数据利用图像识别技术进行微表情识别,并对脑电波数据进行兴奋度识别分析,由此基于微表情及兴奋度联合判定相应的精神指标(如精神状态亢奋,宜谨慎驾驶)。
3)交互控制终端通过采集驾驶时路面的图像数据、车辆的速度数据,处理器对图像数据进行分析,利用图像识别技术进行路况环境识别,结合速度数据和行程数据,判定驾驶体验并给出相应建议(如路况良好,行程时间较紧,可以适当加快速度),如生成并执行显示建议信息的动作指令。
4)交互控制终端通过采集温度数据、湿度数据等物理数据,处理器对数据进行分析,判定环境对应的驾驶体验(如糟糕的环境,不适宜长时间驾驶)。
5)根据1)、2)、3)及4)的判定结果,处理器进行综合判定,智能建议使用者开车安全事项,开车速度,中途休息时间,以及可能错过行程的风险提醒等。
6)根据1)、2)、3)及4)的判定结果,处理器进行综合判定,改善驾驶环境,增强使用者体验感(如选择一些轻音乐,调节空调的温度等)。
由此,该方案有效提升驾驶的安全性,并在保障用户安全驾驶的基础上,有效提升了用户驾驶的舒适度。
本申请实施例提供的一种控制方法,可应用于手机、终端、增强现实(AR)\虚拟现实(VR)设备、平板电脑、笔记本电脑、超级移动个人计算机(UMPC)、上网本、个人数字助理(PDA)等任意终端上,当然,在以下实施例中,对该终端的具体形式不作任何限制。
如图1所示,本申请实施例中的终端可以为手机100。下面以手机100为例对实施例进行具体说明。应该理解的是,图示手机100仅是上述终端的一个范例,并且手机100可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。
如图1所示,手机100具体可以包括:处理器101、射频(RF)电路102、存储器103、触摸屏104、蓝牙装置105、一个或多个传感器106、Wi-Fi装置107、定位装置108、音频电路109、外设接口110以及电源***111等部件。 这些部件可通过一根或多根通信总线或信号线(图2中未示出)进行通信。本领域技术人员可以理解,图2中示出的硬件结构并不构成对手机的限定,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图1对手机100的各个部件进行具体的介绍:
处理器101是手机100的控制中心,利用各种接口和线路连接手机100的各个部分,通过运行或执行存储在存储器103内的应用程序,以及调用存储在存储器103内的数据,执行手机100的各种功能和处理数据。在一些实施例中,处理器101可包括一个或多个处理单元;举例来说,处理器101可以是华为技术有限公司制造的麒麟960芯片。在本申请一些实施例中,上述处理器101还可以包括指纹验证芯片,用于对采集到的指纹进行验证。
在本发明实施例中,处理器101还可以包括图形处理器(Graphics Processing Unit,GPU)115。其中,GPU 115是一种专门在个人电脑、工作站、游戏机和一些移动设备(如平板电脑、智能手机等)上进行图像运算工作的微处理器。它可将手机100所需要的显示信息进行转换驱动,并向显示器104-2提供行扫描信号,控制显示器104-2的正确显示。
具体的,在显示过程中,手机100可将相应的绘图命令发送给GPU 115,例如,该绘图命令可以为“在坐标位置(x,y)处画个长和宽为a×b大小的长方形”,那么,GPU 115根据该绘图指令便可以迅速计算出该图形的所有像素,并在显示器104-2上指定位置画出相应的图形。
需要说明的是,GPU 115可以以功能模块的形式集成在处理器101内,也可以以独立的实体形态(例如,显卡)设置在手机100内,本发明实施例对此不作任何限制。
射频电路102可用于在收发信息或通话过程中,无线信号的接收和发送。特别地,射频电路102可以将基站的下行数据接收后,给处理器101处理;另外,将涉及上行的数据发送给基站。通常,射频电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频电路102还可以通过无线通信和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯***、通用分组无线服务、码分多址、宽带码分多址、长期演进、电子邮件、短消息服务等。
存储器103用于存储应用程序以及数据,处理器101通过运行存储在存储器103的应用程序以及数据,执行手机100的各种功能以及数据处理。存储器103主要包括存储程序区以及存储数据区,其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等);存储数据区可以存储根据使用手机100时所创建的数据(比如音频数据、电话本等)。此外,存储器103可以包括高速随机存取存储器(RAM),还可以包括非易失存储器,例如磁盘存储器件、闪存器件或其他易失性固态存储器件等。存储器103可以存储各种操作***,例如,苹果公司所开发的
Figure PCTCN2017108458-appb-000001
操作***,谷歌公司所 开发的
Figure PCTCN2017108458-appb-000002
操作***等。上述存储器103可以是独立的,通过上述通信总线与处理器101相连接;存储器103也可以和处理器101集成在一起。
其中,手机100的随机存取存储器也可被称为内存或运行内存,手机100内安装的各应用在运行过程中均需要占用一定的内存运行应用相关程序。因此,当内存越大时手机100可以同时运行更多的应用,可更为迅速地运行各个应用,也可更加快速地在不同应用之间切换。
在本发明实施例中,当手机100的内存大小一定时,为了避免在后台运行的后台应用占用过多的手机内存,当手机100将前台应用切换至后台运行时,可释放该应用占用的部分或全部内存,使得应用切入后台运行后占用的内存减小,从而增加手机100可实际利用的运行内存,提高终端内各应用的运行速度。
触摸屏104具体可以包括触控板104-1和显示器104-2。
其中,触控板104-1可采集手机100的用户在其上或附近的触摸事件(比如用户使用手指、触控笔等任何适合的物体在触控板104-1上或在触控板104-1附近的操作),并将采集到的触摸信息发送给其他器件(例如处理器101)。其中,用户在触控板104-1附近的触摸事件可以称之为悬浮触控;悬浮触控可以是指,用户无需为了选择、移动或拖动目标(例如图标等)而直接接触触控板,而只需用户位于终端附近以便执行所想要的功能。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型来实现触控板104-1。
显示器(也称为显示屏)104-2可用于显示由用户输入的信息或提供给用户的信息以及手机100的各种菜单。可以采用液晶显示器、有机发光二极管等形式来配置显示器104-2。触控板104-1可以覆盖在显示器104-2之上,当触控板104-1检测到在其上或附近的触摸事件后,传送给处理器101以确定触摸事件的类型,随后处理器101可以根据触摸事件的类型在显示器104-2上提供相应的视觉输出。虽然在图2中,触控板104-1与显示屏104-2是作为两个独立的部件来实现手机100的输入和输出功能,但是在某些实施例中,可以将触控板104-1与显示屏104-2集成而实现手机100的输入和输出功能。可以理解的是,触摸屏104是由多层的材料堆叠而成,本申请实施例中只展示出了触控板(层)和显示屏(层),其他层在本申请实施例中不予记载。另外,触控板104-1可以以全面板的形式配置在手机100的正面,显示屏104-2也可以以全面板的形式配置在手机100的正面,这样在手机的正面就能够实现无边框的结构。
另外,手机100还可以具有指纹识别功能。例如,可以在手机100的背面(例如后置摄像头的下方)配置指纹识别器112,或者在手机100的正面(例如触摸屏104的下方)配置指纹识别器112。又例如,可以在触摸屏104中配置指纹采集器件112来实现指纹识别功能,即指纹采集器件112可以与触摸屏104集成在一起来实现手机100的指纹识别功能。在这种情况下,该指纹采集器件112配置在触摸屏104中,可以是触摸屏104的一部分,也可以以其他方式配置在触摸屏104中。本申请实施例中的指纹采集器件112的主要部件是指 纹传感器,该指纹传感器可以采用任何类型的感测技术,包括但不限于光学式、电容式、压电式或超声波传感技术等。
手机100还可以包括蓝牙装置105,用于实现手机100与其他短距离的终端(例如手机、智能手表等)之间的数据交换。本申请实施例中的蓝牙装置可以是集成电路或者蓝牙芯片等。
手机100还可以包括至少一种传感器106,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节触摸屏104的显示器的亮度,接近传感器可在手机100移动到耳边时,关闭显示器的电源。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机100还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
Wi-Fi装置107,用于为手机100提供遵循Wi-Fi相关标准协议的网络接入,手机100可以通过Wi-Fi装置107接入到Wi-Fi接入点,进而帮助用户收发电子邮件、浏览网页和访问流媒体等,它为用户提供了无线的宽带互联网访问。在其他一些实施例中,该Wi-Fi装置107也可以作为Wi-Fi无线接入点,可以为其他终端提供Wi-Fi网络接入。
定位装置108,用于为手机100提供地理位置。可以理解的是,该定位装置108具体可以是全球定位***(GPS)或北斗卫星导航***、俄罗斯GLONASS等定位***的接收器。定位装置108在接收到上述定位***发送的地理位置后,将该信息发送给处理器101进行处理,或者发送给存储器103进行保存。在另外的一些实施例中,该定位装置108还可以是辅助全球卫星定位***(AGPS)的接收器,AGPS***通过作为辅助服务器来协助定位装置108完成测距和定位服务,在这种情况下,辅助定位服务器通过无线通信网络与终端例如手机100的定位装置108(即GPS接收器)通信而提供定位协助。在另外的一些实施例中,该定位装置108也可以是基于Wi-Fi接入点的定位技术。由于每一个Wi-Fi接入点都有一个全球唯一的MAC地址,终端在开启Wi-Fi的情况下即可扫描并收集周围的Wi-Fi接入点的广播信号,因此可以获取到Wi-Fi接入点广播出来的MAC地址;终端将这些能够标示Wi-Fi接入点的数据(例如MAC地址)通过无线通信网络发送给位置服务器,由位置服务器检索出每一个Wi-Fi接入点的地理位置,并结合Wi-Fi广播信号的强弱程度,计算出该终端的地理位置并发送到该终端的定位装置108中。
音频电路109、扬声器113、麦克风114可提供用户与手机100之间的音频接口。音频电路109可将接收到的音频数据转换后的电信号,传输到扬声器113,由扬声器113转换为声音信号输出;另一方面,麦克风114将收集的声音信号转换为电信号,由音频电路109接收后转换为音频数据,再将音频数据输 出至RF电路102以发送给比如另一手机,或者将音频数据输出至存储器103以便进一步处理。
外设接口110,用于为外部的输入/输出设备(例如键盘、鼠标、外接显示器、外部存储器、用户识别模块卡等)提供各种接口。例如通过通用串行总线(USB)接口与鼠标连接,通过用户识别模块卡卡槽上的金属触点与电信运营商提供的用户识别模块卡(SIM)卡进行连接。外设接口110可以被用来将上述外部的输入/输出***设备耦接到处理器101和存储器103。
手机100还可以包括给各个部件供电的电源装置111(比如电池和电源管理芯片),电池可以通过电源管理芯片与处理器101逻辑相连,从而通过电源装置111实现管理充电、放电、以及功耗管理等功能。
尽管图1未示出,手机100还可以包括摄像头(前置摄像头和/或后置摄像头)、闪光灯、微型投影装置、近场通信(NFC)装置等,在此不再赘述。
图1所示的终端可应用于图15中,图15为本发明终端与用户的交互控制的示意图。其中,终端与使用者之间进行交互控制,终端采集使用者的输入、生理特征参数等,对数据进行处理生成决策后,提供交互决策的结果。该终端在进行交互控制时主要分为三个功能阶段:获得特征数据的数据源采集阶段、对特征数据进行分析的数据处理及根据处理结果进行交互控制的交互决策。需要说明的是,本申请文件中,终端、交互终端、交互控制终端、移动终端等术语等意,其具体可以为VR眼镜、车载设备、手机等智能终端。
图2为交互控制终端在实现交互控制时的具体流程图,如图所示,该交互控制方法可以包括以下步骤:
步骤201:利用至少一个传感器,获得至少一个特征数据。
其中,步骤201即为交互控制终端的数据源采集阶段,具体可以通过多个硬件传感器实现,这些传感器可以设置在固定位置或者佩戴在使用者身体上,用以准确感知并采集到准确的特征数据,由本申请中的交互控制终端获得。
而终端在利用硬件传感器进行数据采集时,主要包括如图3中所示的传感器:生物类型传感器、物理类型传感器、生物反馈传感器、图像采集模块及音频视频输入输出设备等,本申请中利用这些传感器中的一个或多个来获得特征数据。
在一种实现中,生物类型传感器中可以包括有以下一种或多种传感器,获得相应的用户的生物特征数据:
虹膜传感器,可以佩戴在用户头部,或者设置在固定位置,用于采集用户的虹膜图像,具体可以为:虹膜摄像头等;
指纹传感器,用于采集用户的指纹数据,具体可以为:指纹采集器;
嗅觉传感器,用于通过向用户提供多种气味的味道数据样本,来辨别用户所感知到的气味味道;
味觉传感器,用于通过向用户提供多种物体如食物的味道数据样本,如“酸甜苦 辣咸”等,来辨别用户所感知到的物体味道;
肌肉感知传感器,可以黏贴或者绑定在用户的肌肉位置,如肱二头肌上,用于对用户的肌肉运动进行感知,例如,采集用户的肌肉运动数据,并转化成电信号数据,来表征用户的肌肉运动状态;
脑电波传感器,可以贴敷在用户的头部,用于采集用户的脑电波数据,并将脑电波数据进行输出,具体可以为脑电波芯片等;
生理特征传感器,用于采集用户的各种生物指标数据,具体可以为:血压传感器(绑定在用户胳膊上)、心率传感器(设置在用户脖子或者手指上)、呼吸频率传感器(能感受用户呼吸频率并转换成可用输出信号的传感器)等。
在一种实现中,物理类型传感器中可以包含有以下一种或多种传感器,获得用户的环境特征数据及运动特征数据:
深度传感器,用于采集图像深度数据;
温度传感器,用于采集周边环境的温度数据;
湿度传感器,用于采集周边环境的湿度数据;
声音传感器,用于采集周边环境中的噪音数据;
光线传感器,用于采集周边环境中的光线强度数据;
空气质量传感器,用于采集周边环境的空气质量数据,如PM2.5的含量等;
速度传感器,用于采集目标物如用户的各种速度数据,具体可以为:有线速度传感器、角速度传感器等;
位置传感器,如全球定位***(Global Positioning System,GPS)、北斗卫星导航***(BeiDou Navigation Satellite System,BDS)等,用于采集目标物的位置数据,具体可以为:直线位移传感器、角位移传感器等;
方向传感器,用于采集方向数据。
其中的生物反馈传感器主要用于产生模拟的用户的反馈数据,如压力反馈(使用者感知到压力后的反馈数据)、振动反馈、模拟气味反馈等等。
在一种实现中,图像采集模块可以理解为相机模块,具体可以通过单摄像头或者多摄像头实现,用于采集图像数据。
而音频视频输入输出设备主要用于采集和显示音频和/或视频数据。
由此,本申请中通过以上传感器所获得的特征数据可以包含有以下一种或多种:虹膜图像、指纹数据、气味味道、物体味道、肌肉运动状态、脑电波数据、血压、心率、呼吸频率、图像深度数据、温度数据、湿度数据、速度数据、位置数据、方向数据、图像数据、音频视频数据等。
需要说明的是,终端与传感器之间可以通过WiFi或蓝牙等无线连接方式进行连接,或者,也可以通过串口数据线等有线连接方式进行连接。
步骤202:分析特征数据,得到能够表征交互状态的输出结果。
例如,输入结果是在终端分析特征数据的基础上,能够表征当前交互状态的结果,例如,例如上述网球示例中提到的用户与终端之间进行的游戏交互的数据,包括用户的肌肉运动状态、用户的脑电波活动状态、用户的各种生物指标状态如心率血压呼吸 等。或者,输入结果是在终端分析特征数据的基础上,能够表征当前交互过程或交互结论的信息,例如上述驾驶示例中提到的交互数据,包括环境中目标物的位置状态、移动方向和速度状态、温度及湿度状态等。
其中,本实施例中在实现特征数据分析时,可以包含:数据源识别与处理、数据学习、数据修正几个阶段,主要对之前所获得到的特征数据进行预处理和修正,为交互决策阶段提供输入依据。
在数据源识别与处理阶段,本申请可以通过生物技术识别、生物医学分析及图像处理等算法对特征数据进行数据源识别、数据过滤、数据归一化等处理,得到输出结果。以图像处理为例,通过对特征数据进行颜色识别、眼球识别与跟踪、动作识别、体态识别、表情微表情识别及目标跟踪等处理来得到输出结果。
在数据学习阶段,本申请可以通过机器学习的理论对真实场景进行学习和记忆,为后续的数据修正等阶段提供辅助信息。
在数据修正阶段,本申请根据数据学习阶段所输出的辅助信息,对数据源识别与处理阶段所得到的输出结果进行修正,以提高输出结果的准确性。例如对能够进行3D场景模拟的输出结果进行智能修正,以提高后续建立的3D场景的准确性。
在一种实现方式中,本申请可以在通过各种传感器采集到原始的特征数据之后,首先对特征数据进行数据源识别并分类,进而对不同的特征数据采用不同的数据处理算法进行数据处理,例如,按照特征数据的数据类型进行分类,将不同数据类型的特征数据输出给不同的数据处理算法,如生物技术识别算法、生物医学分析算法及图像处理算法(颜色识别、眼球识别与跟踪、动作识别、体态识别、表情微表情识别及目标跟踪等)等;
之后,对于不同的特征数据采用相应的数据处理算法进行数据源识别、数据过滤(提取)及数据归一化(抽象)等处理,得到输出结果;
其次,利用机器学习所得到的辅助信息对输出结果进行数据修正,输出矫正过的结果作为后续交互决策阶段的输入;
另外,数据修正所输出的数据也可以为数据学习阶段提供进一步的数据基础或素材,使得数据学习可以得到进一步的完善。
另外,在步骤202中分析特征数据时,还可以包括:数据存储、数据建模及决策反馈等几个阶段。
其中,数据存储是指,将通过传感器所采集到的各类特征数据进行存储,将数据学习所得到的结果及各种历史信息等进行存储,等等,由此为后续的决策机制的反馈提供数据支持;
数据建模及决策反馈是指,根据后续的决策机制和数据修正后的数据,进行三维场景重建与增强,从而智能模拟各类场景中的各种互动要素,并根据数据学习及数据存储所得到的数据,对决策机制所产生的决策结果做出相应的反馈输出,也就是说,基于决策结果与存储的数据为用户提供反馈信息。
在一种实现方式中,本实施例在进行反馈输出时,可以将反馈输出给反馈传感器,这里的反馈传感器,如上述各种传感器。
步骤203:确定特征数据对应的决策机制。
其中,本申请中的决策机制可以分为:单点决策机制和多点决策机制,单点决策机制可以为:准入决策机制,通过采集用户特征,、环境感知决策机制、人的生物,动作,肌肉反应,生物特征决策机制等,准入决策机制是指对用户身份进行识别或验证的准入机制。
其中,准入决策机制是其他单点决策机制及多点决策机制的基础,如对用户身份验证等进行交互控制判定的基本决策,如利用指纹识别算法、虹膜识别算法及脸部识别算法等进行交互控制判定;环境感知决策机制是指根据用户周边环境要素进行交互控制判定的决策机制,如利用物理特征提取技术、生物识别技术和图像识别处理技术进行交互控制判定;生物特征决策机制是指根据用户自身的生物指标或特征进行交互控制判定的决策机制,如利用生物识别技术进行交互控制判定,等等。
其中,物理特征提取技术中可以包括有:基于图像物理特征的并行地图匹配算法及基于物理特征的平面流场拓扑简化算法等;物理生物识别技术中可以包括有:基于模板匹配的识别算法、基于系数标识的多模态生物特征识别算法及特征融合生物特征识别算法等,图像识别处理技术中可以包括:基于小波的图像匹配算法、图像局部特征提取算法、双目匹配及融合算法及双目测距/测速算法等。
而多点决策机制是指在单点决策机制的基础上,通过复杂的逻辑实现多个单点决策机制的任意组合所构成的决策机制,如准入决策机制与环境感知决策机制及生物特征决策机制组成的智能模拟交互决策机制、准入决策机制与生物特征决策机制组成的智能生活决策机制等。
步骤204:根据确定的决策机制,确定所述输出结果对应的动作指令。
具体地,可包括如下几种场景:
其一,基于环境特征数据的至少一部分,或者,基于环境特征数据的至少一部分和生物特征数据的至少一部分,控制所述终端进入目标工作模式。其中,所述目标工作模式为终端在执行一个指令后进入的一个具体特定的场景,在该场景下执行相应的功能。例如游戏模式(在该场景下打游戏)、导航模式(在该场景下导航)、驾驶模式(用户在该场景下驾驶,终端智能提示相关信息)、VR游戏模式(在VR环境中玩游戏)、好友互联模式(与社交APP上的好友连线互动)等等。
其二,基于环境特征数据的至少一部分,或者,基于环境特征数据的至少一部分和生物特征数据的至少一部分,控制所述终端显示当前运动画面,使得终端当前显示的运动画面能够实时根据用户的操作行为以及用户所处的环境进行调整。例如,用户在玩游戏,突然外面下雨了,此时终端当前显示的运动画面可调整为雨天场景下,让用户能够更好的沉浸在游戏中。
其三,基于环境特征数据的至少一部分,或者,基于环境特征数据的至少一部分和生物特征数据的至少一部分,控制所述终端提示路面驾驶信息,使得用户能够及时获知驾驶的路况、天气、当前自身所处的状态等等。具体可参见上述驾驶场景下的示例。
其四,至少根据所述特征数据中的温度数据、湿度数据、图像数据、图像深度数 据、方向数据及位置数据,获得环境状态信息,所述环境状态信息包括:环境中物体要素信息及舒适度信息;根据所述环境中物体要素信息及舒适度信息,控制所述终端提示环境相关信息。具体可参考如下实施例。
其五,至少根据所述特征数据中的肌肉运动状态数据、脑电波数据、脸部的图像数据,获得生物状态信息,所述生物状态信息至少包括:生物运动状态信息、生物情感状态信息;根据所述生物运动状态信息及生物情感状态信息,控制所述终端显示生物特征相关信息。具体可参考如下实施例。
步骤205:执行所述动作指令。
例如,在运动场景中,采集用户击球过程中的肌肉状态数据、图像数据等特征数据之后,根据这些特征数据获得输出结果及相应的决策机制,如生物特征决策机制和准入决策机制组合的多点决策机制,从而根据特征数据的输出结果如用户击球力度、球速、方向等确定对应的动作指令,该动作指令可以用于控制球按照运动轨迹进行运动,由此实现智能交互。
在终端执行动作指令之后,获取执行结果,如控制球按照运动轨迹进行运动之后,获得球落地的状态信息或用户击球后的声音反馈等,并基于这一执行结果与已经存储的数据,如经过学习后的数据和历史数据等,为用户提供反馈信息。例如,让用户知晓哪些数据是需要修正的,后续是否可以根据用户的偏好进行调整等。
需要说明的是,除终端执行上述操作外,还可以为用户提示生活信息(例如饮食、作息等);为用户提示工作信息(例如上下班时间点、地点等);为用户提示好友上线信息(例如女朋友的上下线信息);为用户提示数据连接信息(例如由蜂窝网络切换到wifi网络);为用户提示游戏进程信息(例如当前的网球处于第几节、当前的通关游戏玩到第几关);控制所述终端为用户提示路况信息(例如是否有坡度、是否有施工等);为用户提示天气信息;提示用户休息(例如用户游戏时间过长);展示根据所述环境特征数据生成的三维虚拟场景(例如构建虚拟3D,让用户有更好的游戏体验)。
图4所示为准入决策机制的逻辑结构图。如图所示,准入决策机制包括三种方式:虹膜识别和/或指纹识别、以及在此二者基础上辅助以脸部识别的方式,以上三种以及其相互组合的判定机制构成了本申请中交互控制***的准入决策机制。应当理解,该准入决策机制可类似于文中提及的用户身份认证,其相同或相应的技术特征可相互援引。
也就是说,本申请基于虹膜传感器、指纹传感器、单摄像头和/或多摄像头等硬件传感器采集特征数据,如虹膜图像、指纹数据、脸部图像数据等,并通过相应的识别技术,如虹膜识别技术、指纹识别技术及脸部识别技术分别对特征数据进行识别及预处理、学习及修正等操作,得到输出结果,再对输出结果基于准入决策机制进行判定,从而生成并执行动作指令,实现智能交互。
例如,本申请通过采集用户的虹膜图像、指纹数据和脸部图像数据中的一种或多种,再经过虹膜识别、指纹识别及脸部识别之后,得到能够表征用户身份的输出结果, 并在确定身份识别或支付功能的决策机制之后,根据输出结果及决策机制来确定是否登录或者支付的动作指令,从而进行身份认证或者账单支付等动作。
图5所示为环境感知决策机制的逻辑结构图。环境感知决策机制是通过物理、生物和图像处理的技术,分析用户所在周边环境中各个要素的特征,并通过各个要素及其相互的组合来构成复杂环境进而构建感知的决策机制。环境感知决策机制中环境的以下基本要素基于各个特征数据判定:
环境中的基本要素如物体的大小、轮廓、位置、方向,由图像数据、图像深度数据、方向数据、位置数据来判定;
环境中物体的运动状态,由图像数据、图像深度数据及速度数据来判定;
环境舒适度指数,由温度数据、湿度数据、气味味道数据、物体味道数据判定;
以上基本要素、基本要素的运动状态及环境舒适度指数进行组合之后,组成相对高级的判定基础,如涉及预测或情感上的判定等,也就是说,基于以上三个方向来组合进行交互控制判定,例如:基于环境中基本物体的大小、轮廓、位置和方向,辅助气味味道数据和物体味道数据判定(如上述环境基本要素中物体有很多桌椅、很多饭菜,则可以判定是一家餐厅,辅助气味味道,可以判断是一家美味的餐厅)。
也就是说,本申请基于嗅觉传感器、味觉传感器、深度传感器、温度传感器、湿度传感器、速度传感器、位置传感器、方向传感器、音频输入设备、单摄像头和多摄像头等硬件传感器采集特征数据,如气味味道、物体味道、图像深度数据、温度数据、湿度数据、速度数据、位置数据、方向数据及音频视频数据等,再通过物理、生物和图像处理的技术,分析出用户所在周边环境中各个基本要素的特征数据,并通过对特征数据进行识别及预处理、学习及修正等操作,得到输出结果,再对这些输出结果基于环境感知决策机制进行判定,从而生成并执行动作指令,实现智能交互。
例如,本申请通过采集餐厅环境中的各物体如餐桌、菜品等的气味味道、物体味道、图像深度数据、温度数据、湿度数据、速度数据、位置数据、方向数据及音频视频数据等,再通过物理、生物和图像处理的技术,分析数据餐厅环境中餐桌、餐椅、服务生、菜品、厨师等的各种特征数据,并通过对特征数据进行识别、预处理、学习及修正等操作,得到输出结果,再对这些输出结果进行三维模拟重建并进行决策机制判定,得到这是一家非常美味的餐厅的判定结果,从而生成显示餐厅菜单的指令,为用户提供点单交互服务。
图6所示为生物特征决策机制的逻辑结构图。生物特征决策机制是通过生物和图像处理的技术,分析生物和物体的行为模式,抽象出生物和物体的各种特征要素,再通过各个特征要素及其相互的组合构成的复杂生物/物体特征的决策机制。生物特征决策机制中生物/物体特征的基本要素,由肌肉运动状态数据、脑电波数据、图像数据、图像深度数据单独标识,例如:
生物的运动能力,由上述数据中的与动作力度、反应速度、动作及体态等相关的数据判断,是智能模拟运动场景的基础,例如,基于肌肉运动状态数据,判断出生物的动作力度;基于肌肉运动状态数据及脑电波数据判断出生物的反应速度;基于图像数据及图像深度数据判断出生物的动作及体态等;
生物对颜色的偏好,由上述数据中的与情绪、颜色、表情等相关的数据判断,是智能模拟美食、购物场景的基础,例如,基于脑电波数据判断出生物的情绪,对颜色偏好;基于图像数据判断出生物的表情等;
生物的应激反应速度,由上述数据中与细微动作、动作力度、情绪、眼球轨迹、体态、表情等相关的数据判断,是智能模拟运动、游戏、娱乐等场景的基础;
物体的运动系数,由上述数据中与物体轨迹、物体速度等相关的数据判断,是智能模拟场景的基础;
生物习惯,由上述数据中与动作、手势、体态、表情、细微动作等相关的数据判断。
也就是说,本申请基于肌肉感知传感器、脑电波传感器、单摄像头、深度传感器及多摄像头等硬件传感器采集特征数据,如肌肉运动状态数据、脑电波数据、图像数据、图像深度数据等,再通过生物和图像处理的技术,如细微动作判定、动作力度判定、情绪判定、反应速度判定、颜色判定、眼球位置判定、物体跟踪轨迹、眼球跟踪轨迹、动作判定、手势判定、体态判定及表情判定等,分析出生物/物体的各种特征要素的特征数据,并通过对特征数据进行识别及预处理、学习及修正等操作,得到输出结果,再对这些输出结果基于生物特征决策机制进行判定,从而进行场景的智能模拟,如对用户击球场景进行模拟重建,从而生成并执行动作指令,如生成球以运动方向及运动速度进行移动的指令,该指令用于将用户击球后球的运动状态进行显示,实现智能交互。
而对于由两个或更多个单点决策机制所组合形成的多点决策机制,是在单点决策机制如准入决策机制、环境感知决策机制及生物特征决策机制等的基础上,通过复杂的逻辑实现多个单点决策机制的任意组合所构成的决策机制。以下对多点决策机制的应用进行举例说明:
其一,本申请在单点决策机制的基础上,建立多点决策机制,形成智能模拟交互决策***。智能模拟交互是下一代智能设备的重点发展方向,涉及到运动、娱乐、视频、游戏、购物、美食等人们日常生活的大部分场景。
在本实施例中智能模拟交互决策***所涉及的决策机制可以包含有:准入决策机制、环境感知决策机制,如图7中所示,智能模拟交互***中可以设置有单摄像头、多摄像头、深度传感器、温度传感器、湿度传感器、速度传感器、位置传感器、方向传感器、虹膜/指纹传感器等,进而相应采用图像处理技术、物理基础数据处理、虹膜/指纹识别技术及脸部识别技术等实现三维虚拟场景重建、智能准入、虚拟互动及智能反馈等功能,如图8中所示。其中,虚拟场景重建是指:通过复杂的判定建立高性能、高质量的3D虚拟场景;智能准入是指:使用者通过身份准入***验证后,进入虚拟场景;虚拟互动是指:强大的生物/物体特征决策决策机制确保使用者与环境的交互的准确性和效率性;智能反馈是指:辅助决策***做出智能反馈,让使用者沉浸于虚拟互动的世界。
在虚拟场景重建中,可以包含有:运动场景重建、游戏场景重建、视频场景重建、娱乐场景重建、购物场景重建及食物场景重建等,具体基于经过特征数据分析后的输 出结果进行三维场景虚拟重建的判定,包括有:
场景内要素如场景内物体的类型、尺寸、方位、质地或材料、状态等的识别,例如,基于特征数据中的图像数据、图像深度数据、位置数据、方向数据及速度数据等物理数据中的全部或者部分联合进行判定,对场景内要素进行识别;
场景环境参数的获取,例如,基于温度数据、湿度数据等全部或部分联合进行判定,实现场景环境参数的获取。
其中,如图9中所示:
运动场景建中,可以基于图像数据、肌肉运动状态数据、图像深度数据、方向数据、位置数据、速度数据、温度数据及湿度数据等物理数据中的全部或部分联合进行判定,从而重建真实的运动场景,生成并执行相应的动作指令,实现交互控制;
游戏场景(涉及游戏交互运动,如虚拟现实眼镜中击球场景)中,可以基于图像数据、肌肉运动状态数据、脑电波数据、图像深度数据、方向数据、位置数据、速度数据、温度数据及湿度数据等物理数据中的全部或部分联合进行判定;
真实娱乐场景中的娱乐互动场景中,可以基于图像数据、肌肉运动状态数据、脑电波数据、图像深度数据、方向数据、位置数据、速度数据、温度数据及湿度数据等物理数据中的全部或部分联合进行判定;
购物场景中物体展示模型的虚拟场景中,可以基于图像数据、音频视频数据、图像深度数据、方向数据、位置数据、速度数据等物理数据中的全部或部分联合进行判定;
购物场景中物品使用交互场景中,可以基于图像数据、音频视频数据、脑电波数据、气味味道数据、物体味道数据、图像深度数据、方向数据、位置数据及速度数据等物理数据中的全部或部分联合进行判定;
食物场景中,可以基于图像数据、图像深度数据、肌肉状态数据、脑电波数据、气味味道数据、物体味道数据等物理数据中的全部或部分联合进行判定。
其中,以上各个独立的虚拟场景或任意组合的虚拟场景中,在经过决策机制判定之后,生成并执行相应的动作指令,实现交互控制。
而上述智能交互控制的结果可以触发生物反馈传感器等来进行智能反馈场景的模拟重建,具体可以结合生物传感器、振动传感器、音频视频输入输出设备、数据处理器等组件中的全部或部分联合进行判定,从而重建智能反馈场景,为相应的虚拟场景提供反馈信息。
其二,本申请通过多点决策***,形成智能生活***为用户提供最优的安排和建议,以便于用户的工作和生活,智能生活***使下一代智能设备具有一定的智慧,涉及到工作、生活等方方面面的场景。
在本实施例中智能生活***所涉及到的决策机制可以包含有:准入决策机制、生物特征决策机制、环境感知决策机制,如图10中所示,智能生活***中可以包括有:智能准入、人的状态识别、环境状态识别、行程计划和学习记忆、智能安排/建议几个部分。其中,智能准入是指:使用者通过身份准入***验证后,启动智能living***;人的状态识别是指:识别人的状态,包括生理和精神两个方面的状态;环境状态识别 是指:对诸如风雨雷电自然环境、周围环境舒适度、区域内噪声污染和光污染的识别等;智能安排/建议是指:辅助决策***做出最优安排和建议。
如图11中所示,人的精神状态识别判定,由图像数据、脑电波数据进行联合判定;
人的生理状态判定,由生理类型的特征数据如人的磁场、血液各种指标、心率、体温等和学习记忆数据联合判定;
环境场景识别判定,由图像数据、温度数据、湿度数据、速度数据、位置数据、方向数据等辅助学习记忆数据联合进行判定;
智能建议判定,由上述3种判定结果、行程计划、学习记忆等辅助模块联合判定,由此,生成并执行相应的动作指令,实现交互控制。
图12为终端的结构示意图,该终端可以包括以下结构:存储器1201,用于存储应用程序及应用程序运行所产生的数据。处理器1202,用于执行应用程序,以实现以下功能:利用与交互控制终端连接的至少一个传感器获取特征数据,根据所述特征数据以及所述控制终端的决策机制,生成动作指令,再执行所述动作指令。
终端可以为手机、VR眼镜等具有数据处理及控制功能的处理器的终端设备。而终端中各结构的功能的实现具体方式可以参见图2~图11及前文相应内容所示,其相同或相应的技术特征可相互援用,此处不再赘述。
图13为包含图12所示的控制终端的控制***的结构示意图,该***可以包括以下结构:至少一个传感器1301,用于采集至少一个特征数据;控制终端1302,与传感器相连接,用于通过所述至少一个传感器1301获取特征数据,根据所述特征数据以及所述控制终端1302的决策机制,生成动作指令,再执行所述动作指令。需要说明的是,此处传感器与控制终端一致,都是独立的实体,因而示意为两个以以上装置,并归类为***中。
在实际的产品中,传感器也可能是控制终端的一部分,即如图1所述的终端的结构示意图。即,至少一个传感器,用于获取特征数据;处理器,用于根据所述特征数据以及所述终端的决策机制,生成动作指令;执行所述动作指令。
图13中的控制终端的功能的实现具体方式可以参见图2~图11及前文相应内容所示,其相同或相应的技术特征可相互援用,此处不再赘述。
此外,本发明实施例还提供了一种装置,包括:获取单元,用于通过至少一个传感器获取特征数据,所述特征数据为所述终端通过所述至少一个传感器采集到的数据;生成单元,用于根据所述特征数据以及所述终端的决策机制,生成动作指令;执行单元,用于执行所述动作指令。该装置的功能的实现具体方式可以参见图2~图11及前文相应内容所示,其相同或相应的技术特征可相互援用,此处不再赘述。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似部分互相参见即可。
在上述实施例中,可以全部或部分的通过软件,硬件,固件或者其任意组 合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式出现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质,(例如,软盘,硬盘、磁带)、光介质(例如,DVD)或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (45)

  1. 一种控制方法,应用于终端,其特征在于,所述方法包括:
    通过至少一个传感器获取特征数据,所述特征数据为所述终端通过所述至少一个传感器采集到的数据;
    根据所述特征数据以及所述终端的决策机制,生成动作指令;
    执行所述动作指令。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:
    基于环境特征数据的至少一部分,控制所述终端进入目标工作模式,所述特征数据包括所述环境特征数据;或者,
    基于环境特征数据的至少一部分和生物特征数据的至少一部分,控制所述终端进入目标工作模式,所述特征数据至少包括用户的所述生物特征数据及所述环境特征数据。
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:
    基于环境特征数据的至少一部分,控制所述终端显示当前运动画面,所述特征数据包括所述环境特征数据;或者,
    基于环境特征数据的至少一部分和生物特征数据的至少一部分,控制所述终端显示当前运动画面,所述特征数据至少包括用户的所述生物特征数据及所述环境特征数据。
  4. 根据权利要求1所述的方法,其特征在于,所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:
    基于环境特征数据的至少一部分,控制所述终端提示路面驾驶信息,所述特征数据包括所述环境特征数据;或者,
    基于环境特征数据的至少一部分和生物特征数据的至少一部分,控制所述终端提示路面驾驶信息,所述特征数据至少包括用户的所述生物特征数据及所述环境特征数据。
  5. 根据权利要求1所述的方法,其特征在于,所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:
    至少根据所述特征数据中的温度数据、湿度数据、图像数据、图像深度数据、方向数据及位置数据,获得环境状态信息,所述环境状态信息包括:环境中物体要素信息及舒适度信息;
    根据所述环境中物体要素信息及舒适度信息,控制所述终端提示环境相关信息。
  6. 根据权利要求1所述的方法,其特征在于,所述根据所述特征数据以及所述终端的决策机制,生成动作指令,包括:
    至少根据所述特征数据中的肌肉运动状态数据、脑电波数据、脸部的图像数据,获得生物状态信息,所述生物状态信息至少包括:生物运动状态信息、生物情感状态信息;
    根据所述生物运动状态信息及生物情感状态信息,控制所述终端显示生物特征相关信息。
  7. 根据权利要求1-6任一所述的方法,其特征在于,所述终端通过所述至少一个传感器获取了用户的生物特征数据,在执行所述动作指令之前,所述方法还包括:
    基于所述生物特征数据,对用户进行身份识别;
    所述指令所述动作指令,包括:
    若用户身份认证通过,所述终端执行所述动作指令。
  8. The method according to any one of claims 2 to 4, wherein the method further comprises at least one of the following steps:
    controlling the terminal to prompt the user with life information;
    controlling the terminal to prompt the user with work information;
    controlling the terminal to prompt the user with information that a friend has come online;
    controlling the terminal to prompt the user with data connection information;
    controlling the terminal to prompt the user with game progress information;
    controlling the terminal to prompt the user with road condition information;
    controlling the terminal to prompt the user with weather information;
    controlling the terminal to prompt the user to rest; and
    controlling the terminal to present a three-dimensional virtual scene generated according to the environmental feature data.
  9. The method according to claim 1, wherein the generating an action instruction according to the feature data and the decision mechanism of the terminal comprises:
    analyzing the feature data to obtain an output result;
    determining the decision mechanism corresponding to the feature data; and
    determining, according to the decision mechanism, the action instruction corresponding to the output result.
  10. The method according to any one of claims 1 to 9, wherein the action instruction comprises at least one or more of: controlling the terminal to emit speech, controlling the terminal to perform display, and controlling the terminal to trigger a function of an application.
  11. The method according to claim 9 or 10, wherein the analyzing the feature data to obtain an output result comprises:
    performing data-source recognition on the feature data and classifying the feature data; and
    processing the classified feature data by using a corresponding data processing algorithm to obtain the output result.
  12. The method according to claim 11, wherein the processing the classified feature data by using a corresponding data processing algorithm to obtain the output result comprises:
    performing element recognition on the biological feature data by using a biometric recognition algorithm to obtain an output result, wherein the output result comprises at least one of, or any combination of: a fingerprint recognition result, an iris recognition result, a face recognition result, and a biological movement state recognition result; and
    performing element recognition on the environmental feature data by using a physical basic data processing algorithm to obtain an output result, wherein the output result comprises at least one of, or any combination of: recognition results of the type, size, orientation, material, and state of an object in the environment, the ambient temperature, and the ambient humidity.
  13. The method according to claim 9, further comprising:
    performing data learning and data correction on the data of the output result.
  14. The method according to claim 13, further comprising:
    storing the feature data and the learned and corrected output result; and
    after the executing the action instruction, further comprising:
    generating feedback information based on the execution result produced after the action instruction is executed and the stored data.
  15. A terminal, wherein the terminal comprises:
    at least one sensor, configured to obtain feature data; and
    a processor, configured to generate an action instruction according to the feature data and a decision mechanism of the terminal, and to execute the action instruction.
  16. The terminal according to claim 15, wherein
    the processor is specifically configured to control, based on at least a part of environmental feature data, the terminal to enter a target working mode, wherein the feature data comprises the environmental feature data; or
    the processor is specifically configured to control, based on at least a part of environmental feature data and at least a part of biological feature data, the terminal to enter a target working mode, wherein the feature data comprises at least the biological feature data of a user and the environmental feature data.
  17. The terminal according to claim 15, wherein
    the processor is specifically configured to control, based on at least a part of environmental feature data, the terminal to display a current movement picture, wherein the feature data comprises the environmental feature data; or
    the processor is specifically configured to control, based on at least a part of environmental feature data and at least a part of biological feature data, the terminal to display a current movement picture, wherein the feature data comprises at least the biological feature data of a user and the environmental feature data.
  18. The terminal according to claim 15, wherein
    the processor is specifically configured to control, based on at least a part of environmental feature data, the terminal to prompt road driving information, wherein the feature data comprises the environmental feature data; or
    the processor is specifically configured to control, based on at least a part of environmental feature data and at least a part of biological feature data, the terminal to prompt road driving information, wherein the feature data comprises at least the biological feature data of a user and the environmental feature data.
  19. The terminal according to claim 15, wherein
    the processor is specifically configured to: obtain environment state information according to at least temperature data, humidity data, image data, image depth data, direction data, and position data in the feature data, wherein the environment state information comprises object element information and comfort information of the environment; and control, according to the object element information and the comfort information of the environment, the terminal to prompt environment-related information.
  20. The terminal according to claim 15, wherein
    the processor is specifically configured to: obtain biological state information according to at least muscle movement state data, brainwave data, and facial image data in the feature data, wherein the biological state information comprises at least biological movement state information and biological emotional state information; and control, according to the biological movement state information and the biological emotional state information, the terminal to display biological-feature-related information.
  21. The terminal according to any one of claims 15 to 20, wherein the at least one sensor has further obtained biological feature data of a user;
    the processor is further configured to perform identity recognition on the user based on the biological feature data; and
    the processor is specifically configured to execute the action instruction if the user's identity authentication succeeds.
  22. The terminal according to any one of claims 16 to 21, wherein the processor is further configured to perform at least one of the following actions:
    controlling the terminal to prompt the user with life information;
    controlling the terminal to prompt the user with work information;
    controlling the terminal to prompt the user with information that a friend has come online;
    controlling the terminal to prompt the user with data connection information;
    controlling the terminal to prompt the user with game progress information;
    controlling the terminal to prompt the user with road condition information;
    controlling the terminal to prompt the user with weather information;
    controlling the terminal to prompt the user to rest; and
    controlling the terminal to present a three-dimensional virtual scene generated according to the environmental feature data.
  23. The terminal according to claim 15, wherein the processor is specifically configured to: analyze the feature data to obtain an output result; determine the decision mechanism corresponding to the feature data; and determine, according to the decision mechanism, the action instruction corresponding to the output result.
  24. The terminal according to any one of claims 15 to 23, wherein the action instruction comprises at least one or more of: controlling the terminal to emit speech, controlling the terminal to perform display, and controlling the terminal to trigger a function of an application.
  25. The terminal according to claim 23 or 24, wherein the processor is specifically configured to perform data-source recognition on the feature data and classify the feature data, and to process the classified feature data by using a corresponding data processing algorithm to obtain the output result.
  26. The terminal according to claim 25, wherein the processor is specifically configured to: perform element recognition on the biological feature data by using a biometric recognition algorithm to obtain an output result, wherein the output result comprises at least one of, or any combination of: a fingerprint recognition result, an iris recognition result, a face recognition result, and a biological movement state recognition result; and perform element recognition on the environmental feature data by using a physical basic data processing algorithm to obtain an output result, wherein the output result comprises at least one of, or any combination of: recognition results of the type, size, orientation, material, and state of an object in the environment, the ambient temperature, and the ambient humidity.
  27. The terminal according to claim 23, wherein the processor is further configured to perform data learning and data correction on the data of the output result.
  28. The terminal according to claim 27, wherein the processor is further configured to: store the feature data and the learned and corrected output result; and after the action instruction is executed, generate feedback information based on the execution result produced after the action instruction is executed and the stored data.
  29. An apparatus, wherein the apparatus comprises:
    an obtaining unit, configured to obtain feature data by using at least one sensor, wherein the feature data is data collected by the terminal by using the at least one sensor;
    a generation unit, configured to generate an action instruction according to the feature data and a decision mechanism of the terminal; and
    an execution unit, configured to execute the action instruction.
  30. The apparatus according to claim 29, wherein the generation unit is specifically configured to control, based on at least a part of environmental feature data, the terminal to enter a target working mode, wherein the feature data comprises the environmental feature data; or
    control, based on at least a part of environmental feature data and at least a part of biological feature data, the terminal to enter a target working mode, wherein the feature data comprises at least the biological feature data of a user and the environmental feature data.
  31. The apparatus according to claim 29, wherein the generation unit is specifically configured to control, based on at least a part of environmental feature data, the terminal to display a current movement picture, wherein the feature data comprises the environmental feature data; or
    control, based on at least a part of environmental feature data and at least a part of biological feature data, the terminal to display a current movement picture, wherein the feature data comprises at least the biological feature data of a user and the environmental feature data.
  32. The apparatus according to claim 29, wherein the generation unit is specifically configured to control, based on at least a part of environmental feature data, the terminal to prompt road driving information, wherein the feature data comprises the environmental feature data; or
    control, based on at least a part of environmental feature data and at least a part of biological feature data, the terminal to prompt road driving information, wherein the feature data comprises at least the biological feature data of a user and the environmental feature data.
  33. The apparatus according to claim 29, wherein the generation unit is specifically configured to obtain environment state information according to at least temperature data, humidity data, image data, image depth data, direction data, and position data in the feature data, wherein the environment state information comprises object element information and comfort information of the environment; and
    control, according to the object element information and the comfort information of the environment, the terminal to prompt environment-related information.
  34. The apparatus according to claim 29, wherein the generation unit is specifically configured to obtain biological state information according to at least muscle movement state data, brainwave data, and facial image data in the feature data, wherein the biological state information comprises at least biological movement state information and biological emotional state information; and
    control, according to the biological movement state information and the biological emotional state information, the terminal to display biological-feature-related information.
  35. The apparatus according to any one of claims 29 to 34, wherein the apparatus further comprises:
    a recognition unit, configured to perform identity recognition on the user based on the biological feature data; and
    the execution unit is specifically configured to execute the action instruction if the user's identity authentication succeeds.
  36. The apparatus according to any one of claims 30 to 35, wherein the apparatus further comprises a control unit, and the control unit is configured to perform at least one of the following steps:
    controlling the terminal to prompt the user with life information;
    controlling the terminal to prompt the user with work information;
    controlling the terminal to prompt the user with information that a friend has come online;
    controlling the terminal to prompt the user with data connection information;
    controlling the terminal to prompt the user with game progress information;
    controlling the terminal to prompt the user with road condition information;
    controlling the terminal to prompt the user with weather information;
    controlling the terminal to prompt the user to rest; and
    controlling the terminal to present a three-dimensional virtual scene generated according to the environmental feature data.
  37. The apparatus according to claim 29, wherein the generation unit comprises:
    an analysis unit, configured to analyze the feature data to obtain an output result;
    a first determining unit, configured to determine the decision mechanism corresponding to the feature data; and
    a second determining unit, configured to determine, according to the decision mechanism, the action instruction corresponding to the output result.
  38. The apparatus according to any one of claims 29 to 37, wherein the action instruction comprises at least one or more of: controlling the terminal to emit speech, controlling the terminal to perform display, and controlling the terminal to trigger a function of an application.
  39. The apparatus according to claim 37 or 38, wherein the analysis unit is specifically configured to perform data-source recognition on the feature data and classify the feature data, and to process the classified feature data by using a corresponding data processing algorithm to obtain the output result.
  40. The apparatus according to claim 39, wherein the analysis unit is specifically configured to: perform element recognition on the biological feature data by using a biometric recognition algorithm to obtain an output result, wherein the output result comprises at least one of, or any combination of: a fingerprint recognition result, an iris recognition result, a face recognition result, and a biological movement state recognition result; and perform element recognition on the environmental feature data by using a physical basic data processing algorithm to obtain an output result, wherein the output result comprises at least one of, or any combination of: recognition results of the type, size, orientation, material, and state of an object in the environment, the ambient temperature, and the ambient humidity.
  41. The apparatus according to claim 37, wherein the apparatus further comprises: a correction unit, configured to perform data learning and data correction on the data of the output result.
  42. The apparatus according to claim 41, wherein the apparatus further comprises:
    a storage unit, configured to store the feature data and the learned and corrected output result; and
    a feedback unit, configured to: after the execution unit executes the action instruction, generate feedback information based on the execution result produced after the action instruction is executed and the stored data.
  43. A terminal, comprising: one or more processors and a memory, wherein the memory is configured to store computer program code, the computer program code comprises computer instructions, and when the one or more processors execute the computer instructions, the terminal performs the control method according to any one of claims 1 to 14.
  44. A computer storage medium, comprising computer instructions, wherein when the computer instructions are run on a terminal, the terminal is caused to perform the control method according to any one of claims 1 to 14.
  45. A computer program product, wherein when the computer program product runs on a computer, the computer is caused to perform the control method according to any one of claims 1 to 14.
PCT/CN2017/108458 2017-03-21 2017-10-31 一种控制方法、终端及*** WO2018171196A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/496,265 US11562271B2 (en) 2017-03-21 2017-10-31 Control method, terminal, and system using environmental feature data and biological feature data to display a current movement picture
CN201780088792.4A CN110446996A (zh) 2017-03-21 2017-10-31 一种控制方法、终端及***

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710171128 2017-03-21
CN201710171128.8 2017-03-21

Publications (1)

Publication Number Publication Date
WO2018171196A1 true WO2018171196A1 (zh) 2018-09-27

Family

ID=63584144

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108458 WO2018171196A1 (zh) 2017-03-21 2017-10-31 一种控制方法、终端及***

Country Status (3)

Country Link
US (1) US11562271B2 (zh)
CN (1) CN110446996A (zh)
WO (1) WO2018171196A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111267099B (zh) * 2020-02-24 2023-02-28 东南大学 基于虚拟现实的陪护机器控制***
CN111596758A (zh) * 2020-04-07 2020-08-28 延锋伟世通电子科技(上海)有限公司 人机交互方法、***、存储介质及终端
CN114549739B (zh) * 2022-01-12 2023-05-02 江阴小象互动游戏有限公司 一种基于三维数据模型的控制***及方法

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5462504A (en) * 1994-02-04 1995-10-31 True Fitness Technology Inc. Fitness apparatus with heart rate control system and method of operation
JP3580519B2 (ja) * 1997-08-08 2004-10-27 株式会社ハドソン 運動用補助計器
US6746371B1 (en) * 2000-04-28 2004-06-08 International Business Machines Corporation Managing fitness activity across diverse exercise machines utilizing a portable computer system
JP3979351B2 (ja) * 2003-06-30 2007-09-19 ソニー株式会社 通信装置及び通信方法
US6902513B1 (en) * 2002-04-02 2005-06-07 Mcclure Daniel R. Interactive fitness equipment
US8109858B2 (en) * 2004-07-28 2012-02-07 William G Redmann Device and method for exercise prescription, detection of successful performance, and provision of reward therefore
US8047915B2 (en) * 2006-01-11 2011-11-01 Lyle Corporate Development, Inc. Character for computer game and method
US8845496B2 (en) * 2006-03-29 2014-09-30 Nokia Corporation System and method for gaming
CN101510074B (zh) 2009-02-27 2010-12-08 河北大学 一种高临场感智能感知交互运动***及实现方法
CN101788848B (zh) 2009-09-29 2012-05-23 北京科技大学 用于视线追踪***的眼部特征参数检测方法
US8694899B2 (en) * 2010-06-01 2014-04-08 Apple Inc. Avatars reflecting user states
JP2013208315A (ja) * 2012-03-30 2013-10-10 Sony Corp 情報処理装置、情報処理方法、及びプログラム
US10155168B2 (en) * 2012-05-08 2018-12-18 Snap Inc. System and method for adaptable avatars
US9824601B2 (en) * 2012-06-12 2017-11-21 Dassault Systemes Symbiotic helper
US20140004948A1 (en) * 2012-06-28 2014-01-02 Oliver (Lake) Watkins, JR. Systems and Method for Capture and Use of Player Emotive State in Gameplay
US9345404B2 (en) * 2013-03-04 2016-05-24 Hello Inc. Mobile device that monitors an individuals activities, behaviors, habits or health parameters
US9704209B2 (en) * 2013-03-04 2017-07-11 Hello Inc. Monitoring system and device with sensors and user profiles based on biometric user information
US20160220198A1 (en) * 2013-06-21 2016-08-04 Hello Inc. Mobile device that monitors an individuals activities, behaviors, habits or health parameters
US20150079563A1 (en) * 2013-09-17 2015-03-19 Sony Corporation Nonverbal audio cues during physical activity
US10852838B2 (en) * 2014-06-14 2020-12-01 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
CN104298722B (zh) * 2014-09-24 2018-01-19 张鸿勋 多媒体交互***及其方法
US10300394B1 (en) * 2015-06-05 2019-05-28 Amazon Technologies, Inc. Spectator audio analysis in online gaming environments
KR102381687B1 (ko) * 2015-07-30 2022-03-31 인텔 코포레이션 감정 증강형 아바타 애니메이션
US9880735B2 (en) * 2015-08-10 2018-01-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
CN105759654A (zh) 2016-04-13 2016-07-13 四川银沙科技有限公司 穿戴式智能交互***
WO2018045553A1 (zh) * 2016-09-09 2018-03-15 上海海知智能科技有限公司 人机交互的***及方法
DK179471B1 (en) * 2016-09-23 2018-11-26 Apple Inc. IMAGE DATA FOR ENHANCED USER INTERACTIONS
JP7140138B2 (ja) * 2017-10-27 2022-09-21 ソニーグループ株式会社 情報処理装置および情報処理方法、プログラム、並びに情報処理システム
US20200019242A1 (en) * 2018-07-12 2020-01-16 Microsoft Technology Licensing, Llc Digital personal expression via wearable device
US11103773B2 (en) * 2018-07-27 2021-08-31 Yogesh Rathod Displaying virtual objects based on recognition of real world object and identification of real world object associated location or geofence

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103826146A (zh) * 2012-10-19 2014-05-28 三星电子株式会社 显示设备、控制显示设备的遥控设备及其方法
CN105556581A (zh) * 2013-10-25 2016-05-04 英特尔公司 对车载环境条件做出响应
CN105807913A (zh) * 2015-01-19 2016-07-27 三星电子株式会社 基于生物信息的可穿戴装置、***和操作方法
CN105929942A (zh) * 2015-02-27 2016-09-07 意美森公司 基于用户情绪产生动作

Also Published As

Publication number Publication date
US11562271B2 (en) 2023-01-24
CN110446996A (zh) 2019-11-12
US20200034729A1 (en) 2020-01-30

Similar Documents

Publication Publication Date Title
US10446145B2 (en) Question and answer processing method and electronic device for supporting the same
WO2020233464A1 (zh) 模型训练方法、装置、存储介质及设备
US20230076281A1 (en) Generating collectible items based on location information
CN109918975A (zh) 一种增强现实的处理方法、对象识别的方法及终端
CN105431813B (zh) 基于生物计量身份归属用户动作
CN102918518B (zh) 基于云的个人特征简档数据
CN108307037A (zh) 终端控制方法、终端及计算机可读存储介质
CN110168586A (zh) 情境生成和对定制的媒体内容的选择
US11934643B2 (en) Analyzing augmented reality content item usage data
CN108182626A (zh) 服务推送方法、信息采集终端及计算机可读存储介质
WO2018171196A1 (zh) 一种控制方法、终端及***
US20220101361A1 (en) Augmented reality content items to track user activity and redeem promotions
US20230120037A1 (en) True size eyewear in real time
WO2022212174A1 (en) Interface with haptic and audio feedback response
EP4315002A1 (en) Interface with haptic and audio feedback response
US20230289560A1 (en) Machine learning techniques to predict content actions
EP4314999A1 (en) User-defined contextual spaces
WO2022146799A1 (en) Compressing image-to-image models
CN117099158A (zh) 用于改变声音的特性的神经网络
CN113050792A (zh) 虚拟对象的控制方法、装置、终端设备及存储介质
US11922587B2 (en) Dynamic augmented reality experience
CN112911356B (zh) 一种虚拟现实vr视频的播放方法及相关设备
US20230344728A1 (en) Augmented reality experience event metrics system
WO2023244579A1 (en) Virtual remote tele-physical examination systems
EP4341784A1 (en) Automatic media capture using biometric sensor data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17901855

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17901855

Country of ref document: EP

Kind code of ref document: A1