CN110442233B - Augmented reality keyboard and mouse system based on gesture interaction


Info

Publication number
CN110442233B
Authority
CN
China
Prior art keywords
module
keyboard
mouse
mems
emg
Prior art date
Legal status
Active
Application number
CN201910524900.9A
Other languages
Chinese (zh)
Other versions
CN110442233A (en
Inventor
印二威
谢良
秦伟
鹿迎
邓宝松
闫野
Current Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date
Filing date
Publication date
Application filed by Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center, National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
Priority to CN201910524900.9A priority Critical patent/CN110442233B/en
Publication of CN110442233A publication Critical patent/CN110442233A/en
Application granted granted Critical
Publication of CN110442233B publication Critical patent/CN110442233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/015 - Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/016 - Input arrangements with force or tactile feedback as computer generated output to the user
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention provides an augmented reality keyboard and mouse system based on gesture interaction, comprising: a MEMS module, an EMG module, an infrared light module, an electrotactile stimulation module, an auditory feedback module, a depth image acquisition module, a visual enhancement module, and a comprehensive processing module. The comprehensive processing module is connected to the MEMS module, the EMG module, the infrared light module, the electrotactile stimulation module, the auditory feedback module, the depth image acquisition module, and the visual enhancement module, respectively; the auditory feedback module, the depth image acquisition module, and the visual enhancement module are mounted on the augmented reality glasses. Compared with a traditional keyboard and mouse, the invention achieves more natural human-computer interaction with a stronger sense of realism, at low manufacturing cost and with a short production cycle; it is simple to wear, lightweight, and highly universal, and suits a wide variety of application scenarios. Compared with single-modality interaction methods, the MEMS- and EMG-based interaction method combining multi-modal gesture recognition with tactile and auditory feedback is more stable and delivers a better user experience.

Description

Augmented reality keyboard and mouse system based on gesture interaction
Technical Field
The invention relates to the field of human-computer interaction, and in particular to an augmented reality keyboard and three-dimensional mouse human-machine interface system based on gesture interaction.
Background
Augmented Reality (AR) is an interactive experience of a real-world environment in which the objects residing in the real world are "augmented" by computer-generated sensory information such as visuals, sound, and touch. The superimposed sensory information can either supplement the natural environment or mask it, and it is seamlessly interwoven with the physical world so that the user obtains an immersive interactive experience. Augmented reality is used to enhance natural environments or situations and to offer perceptually enriched experiences. With augmented reality technology, the real-world information surrounding a user becomes interactive and digitally manipulable.
In recent years, augmented reality technology has developed rapidly, gradually attracting the attention of more and more researchers and high-tech companies, and it has shown great application prospects in fields such as medical care, education, gaming, and the military. Augmented reality is also widely expected to become the next-generation mainstream human-computer interaction device, replacing PCs and smartphones in the near future.
Because users are often mobile and typically cannot carry bulky interaction-control hardware, realizing keyboard input and three-dimensional mouse control in augmented reality is very challenging. Traditional keyboards and mice clearly cannot satisfy the interaction demands of virtual and augmented reality, so there is an urgent need for a wearable interaction-control system with keyboard and mouse functions that enables more natural human-machine interconnection and interaction.
At present, the main interaction means adopted by augmented reality systems are gesture interaction based on depth-image information and voice interaction. However, image-based methods are easily disturbed by natural light and therefore tend to fail in outdoor environments; moreover, because the hand moves over a large range in space, fingers occlude and interfere with one another, and the hand surface lacks highly distinguishable features, image-based dynamic gesture tracking systems suffer from low precision and ambiguity. Voice-based interaction methods place strict requirements on background noise and on the cloud-computing network environment; in crowded public places or under unstable network conditions, their recognition accuracy drops sharply. In recent years, MEMS (micro-electro-mechanical systems) acceleration sensors and EMG (electromyography) sensors have gradually taken over the sensor market thanks to their small size, light weight, low power consumption, high reliability, high sensitivity, and high integration, and they are attracting growing attention in the field of human-computer interaction.
Disclosure of Invention
The object of the invention is achieved by the following technical solution.
To address the problems above, the invention provides an augmented reality keyboard and mouse system based on gesture interaction. The system effectively fuses MEMS and EMG gesture-interaction information, is assisted by infrared light marker points, and provides interactive immersion through tactile and auditory feedback, thereby improving interaction speed and user experience in an augmented reality environment. The invention aims to make augmented reality control natural and operable, and constitutes a brand-new keyboard and mouse control mode.
Specifically, the invention provides an augmented reality keyboard and mouse system based on gesture interaction, used in cooperation with augmented reality glasses and comprising: a MEMS module, an EMG module, an infrared light module, an electrotactile stimulation module, an auditory feedback module, a depth image acquisition module, a visual enhancement module, and a comprehensive processing module. The comprehensive processing module is connected to the MEMS module, the EMG module, the infrared light module, the electrotactile stimulation module, the auditory feedback module, the depth image acquisition module, and the visual enhancement module, respectively; the auditory feedback module, the depth image acquisition module, and the visual enhancement module are mounted on the augmented reality glasses.
Preferably, the MEMS module consists of six 9-axis acceleration sensors on each of the left and right hands, placed at the center of the 1st or 2nd knuckle of each of the five fingers and at the center of the back of the hand; the MEMS module collects the user's hand-motion information and sends the 9-axis acceleration signals to the comprehensive processing module.
Preferably, the EMG module consists of several pairs of differential electromyography electrodes arranged at the middle of the user's forearm, and is configured to acquire the user's muscle-activity information and send the electromyographic signals to the comprehensive processing module.
Preferably, the infrared light module consists of LED lamps located at the center of the back of the hand and on the index finger; it is powered by the comprehensive processing module and is used to generate the virtual laser beam.
Preferably, the electrotactile stimulation module consists of electrotactile stimulation electrode pads located at the fingertips of both hands; it is controlled by the comprehensive processing module and generates tactile feedback for keyboard and mouse key presses.
Preferably, the auditory feedback module consists of speakers on both temples of the augmented reality glasses; it is controlled by the comprehensive processing module and provides auditory feedback for keyboard and mouse key presses.
Preferably, the depth image acquisition module consists of a binocular camera located at the nose bridge of the augmented reality glasses; it captures depth maps of the two light spots emitted by the infrared light module, from which the pointing direction of the virtual laser beam is solved.
Preferably, the visual enhancement module presents keyboard input content and three-dimensional mouse control results, providing visual feedback to the user.
Preferably, the comprehensive processing module receives the multi-channel signals of the MEMS module and the EMG module, performs real-time signal processing and recognition, and feeds the recognition result back to the user through the electrotactile stimulation module, the auditory feedback module, and the visual enhancement module.
Preferably, the comprehensive processing module is connected to the MEMS module, the EMG module, the infrared light module, and the electrotactile stimulation module through a ductile conductive material, and is connected wirelessly to the auditory feedback module, the depth image acquisition module, and the visual enhancement module.
Preferably, the multi-channel signals of the MEMS module and the EMG module are processed in the comprehensive processing module as follows:
(1) synchronously acquire the MEMS and EMG signals;
(2) filter the MEMS and EMG signals;
(3) feed the filtered MEMS and EMG signals into a temporal convolutional neural network trained on a keyboard-and-mouse control gesture sample database, and output a gesture classification result and a recognition score (a minimal sketch of such a classifier follows this list);
(4) judge whether the recognition score exceeds a threshold: if yes, proceed to step (5); if no, return to step (1);
(5) output the gesture recognition instruction and store the MEMS and EMG data;
(6) label the stored MEMS and EMG data with the user's keyboard-input and mouse-button control behaviour to generate a user-specific sample set;
(7) import the user-specific sample set into the keyboard-and-mouse control gesture sample database and retrain the temporal convolutional neural network.
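By way of illustration only, and not as part of the claimed invention, the classifier of step (3) could be sketched as a small temporal convolutional network in PyTorch. The channel count (12 sensors x 9 axes = 108 MEMS channels plus 8 EMG channels), the layer sizes, the number of gesture classes, and the 0.9 threshold are all assumptions of the sketch, not values specified by the patent.

```python
# Hypothetical sketch of steps (3)-(5): a temporal convolutional network (TCN)
# classifying fused MEMS + EMG windows into gesture classes with a score.
# Channel counts, layer sizes and the threshold are illustrative assumptions.
import torch
import torch.nn as nn

class GestureTCN(nn.Module):
    def __init__(self, in_channels=116, num_classes=32):
        super().__init__()
        self.net = nn.Sequential(
            # Dilated 1-D convolutions grow the receptive field over time.
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):              # x: (batch, channels, time)
        logits = self.fc(self.net(x).squeeze(-1))
        probs = torch.softmax(logits, dim=-1)
        score, label = probs.max(dim=-1)   # recognition score and class
        return label, score

# Step (4): only emit a gesture instruction when the score clears a threshold.
model = GestureTCN()
window = torch.randn(1, 116, 200)      # e.g. 200 samples of 116 channels
label, score = model(window)
if score.item() > 0.9:                 # threshold value is an assumption
    print(f"gesture {label.item()} accepted (score {score.item():.2f})")
```

The stored, user-labelled windows from steps (5)-(7) would then simply be appended to the training set before the network is retrained.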
Preferably, the working process of the comprehensive processing module is as follows:
(1) synchronously acquire acceleration and electromyographic signals from the MEMS module and the EMG module, and perform gesture recognition;
(2) detect whether the gesture contains a synchronous lifting or pressing action of both hands, which determines whether to enter the system control state: if yes, proceed to step (3); if no, return to step (1);
(3) apply an initial tactile feedback to the fingertips of both hands through the electrotactile stimulation module, and detect whether a one-hand half-fist gesture is present, which determines whether to enter the two-hand keyboard input state: if yes, proceed to step (4); if no, proceed to step (9);
(4) identify the moving finger and its direction of motion, and output the recognition category and score;
(5) judge whether a key is pressed: if yes, proceed to step (6); if no, return to step (4);
(6) apply keyboard-tap tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module, and display the output text in the visual enhancement module;
(7) judge from the gesture recognition result whether the thumb is pressed: if yes, proceed to step (8); if no, recommend candidate input content in order of probability based on the input method's code table, and return to step (4);
(8) confirm and display the output text in the visual enhancement module, and judge whether input is finished: if yes, end the current operation; if no, return to step (4);
(9) identify the keyboard hand and the mouse hand from the gesture detection result, switch to the keyboard-and-mouse state, and take over the keyboard and mouse functions;
(10) judge whether the current hand is the keyboard hand: if yes, further judge whether a key is pressed; if a key is pressed, issue a control instruction, synchronously apply keyboard-tap tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module, and proceed to step (13); if the current hand is not the keyboard hand, proceed to step (11);
(11) judge whether the virtual beam is selected: if yes, compute the pointing direction of the line connecting the two light spots from the depth image acquisition module and display the real-time virtual laser beam direction in the visual enhancement module (a geometric sketch follows this list); if no, compute the three-dimensional displacement from the MEMS signals and display the real-time three-dimensional coordinate position in the visual enhancement module;
(12) judge whether a mouse button is pressed: if yes, issue a control instruction, apply mouse-button tactile and auditory feedback, and proceed to step (13);
(13) output the keyboard and mouse cooperative control command.
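As an illustrative aside (not part of the claimed subject matter), the geometry of step (11) can be sketched as follows: the two infrared light spots are back-projected from the binocular depth map to 3-D points, and the virtual beam is the normalized line through them; the alternative branch integrates MEMS acceleration into a displacement. The pinhole back-projection, the intrinsics values, and the naive double-integration cursor model are assumptions of the sketch; the patent only states that the pointing direction of the spot connecting line is solved from the depth map.

```python
# Illustrative sketch of step (11): virtual laser beam from the two infrared
# light spots, or a 3-D cursor displacement from an MEMS acceleration window.
# Intrinsics, function names and the integration model are assumptions.
import numpy as np

def spot_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth (metres) using pinhole intrinsics."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def beam_direction(hand_spot, finger_spot):
    """Unit vector from the back-of-hand LED through the index-finger LED."""
    d = finger_spot - hand_spot
    return d / np.linalg.norm(d)

def mouse_displacement(accel, dt):
    """Naive 3-D displacement from an acceleration window (double integration)."""
    vel = np.cumsum(accel * dt, axis=0)   # integrate acceleration to velocity
    return np.sum(vel * dt, axis=0)       # integrate velocity to displacement

# Example: two detected spots in the depth map (pixel coords + depth in metres).
fx = fy = 600.0
cx, cy = 320.0, 240.0                     # assumed camera intrinsics
hand = spot_to_3d(300, 260, 0.45, fx, fy, cx, cy)
finger = spot_to_3d(340, 230, 0.40, fx, fy, cx, cy)
print("virtual beam direction:", beam_direction(hand, finger))
```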
The invention has the following advantages: compared with a traditional keyboard and mouse, the augmented reality keyboard and mouse system based on gesture interaction achieves more natural human-computer interaction with a stronger sense of realism, at low manufacturing cost and with a short production cycle; it is simple to wear, lightweight, and highly universal, and suits a wide variety of application scenarios; and compared with single-modality interaction methods, the MEMS- and EMG-based interaction method combining multi-modal gesture recognition with tactile and auditory feedback is more stable and delivers a better user experience.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a diagram of the hardware components of the system of the present invention;
FIG. 2 is a block diagram of the system of the present invention;
FIG. 3 is a signal processing and recognition flow diagram of the gesture recognition process of the present invention;
FIG. 4 is a system flow diagram of the present invention in a particular application process.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
To address the technical problems in the prior art, the invention provides a gesture interaction-based augmented reality keyboard and mouse system that is simple in structure, easy to operate, and improves operation speed as well as operation reliability and accuracy.
Fig. 1 is the system hardware composition diagram of the present invention, in which 1 is an electrotactile stimulation electrode, 2 is a MEMS motion sensor, 3 is the comprehensive processing module, 4 is an electromyography electrode pad, 5 is an infrared LED, 6 is a binocular depth camera, 7 is an augmented reality display lens, and 8 is a speaker.
Fig. 2 is a block diagram of the system according to the present invention. The augmented reality keyboard and mouse system based on gesture interaction comprises a MEMS module (the MEMS motion sensor 2 in fig. 1), an EMG module (the electromyography electrode pad 4 in fig. 1), an infrared light module (the infrared LED 5 in fig. 1), an electrotactile stimulation module (the electrotactile stimulation electrode 1 in fig. 1), an auditory feedback module (the speaker 8 in fig. 1), a depth image acquisition module (the binocular depth camera 6 in fig. 1), a visual enhancement module (the augmented reality display lens 7 in fig. 1), and a comprehensive processing module (the comprehensive processing module 3 in fig. 1). The comprehensive processing module is connected to the MEMS module, the EMG module, the infrared light module, the electrotactile stimulation module, the auditory feedback module, the depth image acquisition module, and the visual enhancement module, respectively.
The MEMS module consists of six 9-axis acceleration sensors per hand, placed at the 1st or 2nd knuckle of each of the five fingers and at the center of the back of the hand, giving twelve sensors across both hands; it collects the user's hand-motion information and sends the 9-axis acceleration signals to the comprehensive processing module.
The EMG module consists of 6-10 pairs of differential electromyography electrodes arranged around the middle of the user's forearm; it acquires the user's muscle-activity information and sends the electromyographic signals to the comprehensive processing module.
The infrared light module consists of LED lamps at the center of the back of the hand and on the index finger; it is powered by the comprehensive processing module and is used to generate the virtual laser beam.
The electrotactile stimulation module consists of electrotactile stimulation electrode pads at the fingertips of both hands; it is controlled by the comprehensive processing module and generates tactile feedback for keyboard and mouse key presses.
The auditory feedback module consists of speakers on both temples of the augmented reality glasses; it is controlled by the comprehensive processing module and provides auditory feedback for keyboard and mouse key presses.
The depth image acquisition module consists of a binocular camera at the nose bridge of the augmented reality glasses; it captures depth maps of the two light spots emitted by the infrared light module, from which the pointing direction of the virtual laser beam is solved.
The visual enhancement module presents keyboard input content and three-dimensional mouse control results, providing visual feedback to the user.
The comprehensive processing module receives the multi-channel signals of the MEMS module and the EMG module, performs real-time signal processing and recognition, and feeds the recognition results back to the user through the electrotactile stimulation module, the auditory feedback module, and the visual enhancement module; it is connected to the MEMS module, the EMG module, the infrared light module, and the electrotactile stimulation module through ductile conductive material such as liquid metal, and to the auditory feedback module, the depth image acquisition module, and the visual enhancement module through wireless means such as WiFi or Bluetooth.
as shown in fig. 3, the processing steps of the multi-channel signals of the MEMS module and the EMG module in the integrated processing module are as follows:
(1) the system starts to operate, and MEMS signals and EMG signals are synchronously collected;
(2) carrying out 50Hz ChebyshevI type IIR notch filtering on the MEMS and EMG multichannel signals, then carrying out 0.1-30Hz ChebyshevI type IIR band-pass filtering on the MEMS signals, and carrying out 0.1-70Hz ChebyshevI type IIR band-pass filtering on the EMG signals;
(3) inputting the filtered MEMS and EMG signals into a time sequence convolution neural network obtained based on the training of a keyboard and mouse control gesture sample library, and outputting a gesture classification result and an identification score;
(4) determining whether the identification score exceeds a threshold: if the judgment result is yes, entering the step (5); if the judgment result is negative, returning to the step (1);
(5) outputting a gesture recognition instruction, and storing data for expanding a sample database;
(6) marking the stored MEMS and EMG data according to the user control behavior information of keyboard input and mouse keys to generate a new sample set of a specific user;
(7) and importing the newly obtained sample set into a keyboard and mouse control gesture sample database, and retraining the time sequence convolutional neural network.
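By way of illustration, step (2) can be sketched with standard IIR design routines. The patent specifies the filter family (Chebyshev Type I IIR) and the bands (50 Hz notch, 0.1-30 Hz for MEMS, 0.1-70 Hz for EMG); the filter order, passband ripple, sampling rate, and channel counts below are assumptions of the sketch.

```python
# Sketch of step (2): 50 Hz Chebyshev Type I IIR notch on all channels, then
# 0.1-30 Hz band-pass for MEMS and 0.1-70 Hz band-pass for EMG.
# Filter order, ripple (rp) and sampling rate FS are illustrative assumptions.
import numpy as np
from scipy import signal

FS = 1000.0  # assumed sampling rate in Hz

def cheby1_sos(btype, band):
    # 4th-order Chebyshev Type I with 0.5 dB passband ripple (assumed values).
    return signal.iirfilter(4, band, rp=0.5, btype=btype,
                            ftype='cheby1', fs=FS, output='sos')

notch_50hz = cheby1_sos('bandstop', [49.0, 51.0])
mems_band = cheby1_sos('bandpass', [0.1, 30.0])
emg_band = cheby1_sos('bandpass', [0.1, 70.0])

def preprocess(mems, emg):
    """mems, emg: arrays of shape (samples, channels)."""
    mems = signal.sosfiltfilt(notch_50hz, mems, axis=0)
    emg = signal.sosfiltfilt(notch_50hz, emg, axis=0)
    mems = signal.sosfiltfilt(mems_band, mems, axis=0)
    emg = signal.sosfiltfilt(emg_band, emg, axis=0)
    return mems, emg

# Example: one second of 108 MEMS channels and 8 EMG channels (assumed counts).
mems, emg = preprocess(np.random.randn(1000, 108), np.random.randn(1000, 8))
```

Zero-phase filtering (sosfiltfilt) is used here for clarity; a real-time system would use a causal variant such as sosfilt with persistent state.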
As shown in fig. 4, the control logic of the comprehensive processing module proceeds through the following steps (a schematic state-machine sketch follows the list):
(1) the system starts operating, and acceleration and electromyographic signals are synchronously acquired and gestures recognized based on the MEMS module and the EMG module;
(2) detect whether the gesture contains a synchronous lifting or pressing action of both hands, which determines whether to enter the system control state: if yes, proceed to step (3); if no, return to step (1);
(3) apply an initial tactile feedback to the fingertips of both hands through the electrotactile stimulation module, and detect whether a one-hand half-fist gesture is present, which determines whether to enter the two-hand keyboard input state: if yes, proceed to step (4); if no, proceed to step (9);
(4) identify the moving finger and its direction of motion using the temporal convolutional neural network, and output the recognition category and score;
(5) judge whether a key is pressed: if yes, proceed to step (6); if no, return to step (4);
(6) the system applies keyboard-tap tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module, and displays the output text in the visual enhancement module;
(7) judge from the gesture recognition result whether the thumb is pressed: if yes, proceed to step (8); if no, recommend candidate input content in order of probability based on the input method's code table, and return to step (4);
(8) confirm and display the output text in the visual enhancement module, and judge whether input is finished: if yes, end the current operation; if no, return to step (4);
(9) identify the keyboard hand and the mouse hand from the gesture detection result, switch to the keyboard-and-mouse state, and take over the keyboard and mouse functions;
(10) judge whether the current hand is the keyboard hand: if yes, further judge whether a key is pressed; if a key is pressed, the system issues a control instruction, synchronously applies keyboard-tap tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module, and proceeds to step (13); if the current hand is not the keyboard hand, proceed to step (11);
(11) judge whether the virtual beam is selected: if yes, compute the pointing direction of the line connecting the two light spots from the binocular camera and display the real-time virtual laser beam direction in the visual enhancement module; if no, compute the three-dimensional displacement from the MEMS signals and display the real-time three-dimensional coordinate position in the visual enhancement module;
(12) judge whether a mouse button is pressed: if yes, issue a control instruction and apply mouse-button tactile and auditory feedback;
(13) output the keyboard and mouse cooperative control command;
(14) end.
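Purely as an illustration of the control flow in fig. 4 (not the patented implementation), the steps above can be read as a small state machine; every helper name below is a hypothetical stub standing in for the modules described earlier.

```python
# Illustrative state-machine rendering of the fig. 4 control logic.
# All attribute and method names are hypothetical stubs for the real modules.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # step (1): acquire signals and recognise gestures
    KEYBOARD = auto()   # steps (4)-(8): two-hand keyboard input
    KEYMOUSE = auto()   # steps (9)-(13): split keyboard hand / mouse hand

def run(recognizer, feedback, display):
    """Event loop sketch; recognizer, feedback and display are stub objects."""
    state = State.IDLE
    while True:
        g = recognizer.next_gesture()                 # MEMS + EMG recognition
        if state is State.IDLE:
            if g.both_hands_lift_or_press:            # step (2)
                feedback.tactile_both_hands()         # step (3): initial cue
                state = State.KEYBOARD if g.one_hand_half_fist else State.KEYMOUSE
        elif state is State.KEYBOARD:
            if g.key_pressed:                         # steps (5)-(6)
                feedback.key_click()
                display.show_text(g.key)
            elif g.thumb_pressed:                     # step (7): confirm
                display.confirm_text()
                if g.input_finished:                  # step (8)
                    return                            # step (14): end
        elif state is State.KEYMOUSE:
            if g.is_keyboard_hand and g.key_pressed:  # step (10)
                feedback.key_click()
            elif g.beam_selected:                     # step (11): spot line
                display.show_beam(g.beam_direction)
            elif g.mouse_button:                      # step (12)
                feedback.mouse_click()
```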
Compared with a traditional keyboard and mouse, the augmented reality keyboard and mouse system based on gesture interaction achieves more natural human-computer interaction with a stronger sense of realism, at low manufacturing cost and with a short production cycle; it is simple to wear, lightweight, and highly universal, and suits a wide variety of application scenarios; and compared with single-modality interaction methods, the MEMS- and EMG-based interaction method combining multi-modal gesture recognition with tactile and auditory feedback is more stable and delivers a better user experience.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (9)

1. An augmented reality keyboard and mouse system based on gesture interaction, for use with augmented reality glasses, comprising:
a MEMS module, an EMG module, an infrared light module, an electrotactile stimulation module, an auditory feedback module, a depth image acquisition module, a visual enhancement module, and a comprehensive processing module;
the comprehensive processing module is connected to the MEMS module, the EMG module, the infrared light module, the electrotactile stimulation module, the auditory feedback module, the depth image acquisition module, and the visual enhancement module, respectively;
the auditory feedback module, the depth image acquisition module, and the visual enhancement module are arranged on the augmented reality glasses;
the depth image acquisition module is used for acquiring depth maps of the two light spots emitted by the infrared light module, so as to solve the pointing direction of the virtual laser beam;
the MEMS module consists of six 9-axis acceleration sensors on each of the left and right hands, placed at the 1st or 2nd knuckle of each of the five fingers and at the center of the back of the hand; the MEMS module is used for collecting the user's hand-motion information and sending the 9-axis acceleration signals to the comprehensive processing module; the EMG module consists of several pairs of differential electromyography electrodes arranged at the middle of the user's forearm, and is used for acquiring the user's muscle-activity information and sending the electromyographic signals to the comprehensive processing module;
the comprehensive processing module is used for receiving the multi-channel signals of the MEMS module and the EMG module, processing and recognizing the signals in real time, and feeding the recognition result back to the user through the electrotactile stimulation module, the auditory feedback module, and the visual enhancement module;
specifically, the comprehensive processing module is used for outputting a gesture classification result and a recognition score from the signals of the MEMS module and the EMG module, based on a temporal convolutional neural network trained on a keyboard-and-mouse control gesture sample database;
the comprehensive processing module is used for judging, from the signals of the MEMS module and the EMG module, whether the detected target enters the two-hand keyboard input state;
when the two-hand keyboard input state is entered, the comprehensive processing module is used for identifying the moving finger and its direction of motion and outputting the recognition category and score; it further judges whether a key is pressed, and if so, the system applies keyboard-tap tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module and displays the output text in the visual enhancement module;
when the two-hand keyboard input state is not entered, the comprehensive processing module is used for switching to the keyboard-and-mouse state, taking over the keyboard and mouse functions simultaneously, and further judging whether the current hand is the keyboard hand;
if the detected target is the keyboard hand, it further judges whether a key is pressed, and if so, the system issues a control instruction and synchronously applies keyboard-tap tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module;
if the detected target is not the keyboard hand, it further judges whether a mouse-button action occurs, and if so, the system issues a control instruction and synchronously applies mouse-button tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module.
2. The gesture interaction based augmented reality keyboard and mouse system of claim 1,
the infrared light module consists of LED lamps positioned in the center of the back of the hand and on the index finger, is powered by the comprehensive processing module and is used for generating virtual laser beams.
3. The gesture interaction based augmented reality keyboard and mouse system of claim 1,
the electric touch stimulation module is composed of electric touch stimulation electrode plates positioned at the fingertip positions of both hands, is controlled by the comprehensive processing module and is used for generating touch feedback information of keys of a keyboard and a mouse.
4. The gesture interaction based augmented reality keyboard and mouse system of claim 1,
the hearing feedback module consists of loudspeakers on two sides of the glasses legs of the augmented reality glasses, is controlled by the comprehensive processing module and is used for providing hearing feedback information of a keyboard and mouse buttons.
5. The gesture interaction based augmented reality keyboard and mouse system of claim 1,
the depth image acquisition module is composed of a binocular camera located at the nose bridge of the augmented reality glasses.
6. The gesture interaction based augmented reality keyboard and mouse system of claim 1,
the visual enhancement module is used for presenting keyboard input content and three-dimensional mouse control results and providing visual feedback for a user.
7. The gesture interaction based augmented reality keyboard and mouse system of claim 1,
the comprehensive processing module is connected with the MEMS module, the EMG module, the infrared optical module and the electrotactile stimulation module respectively by adopting a conductive material with ductility, and is connected with the auditory feedback module, the depth image acquisition module and the visual enhancement module in a wireless mode.
8. The gesture interaction based augmented reality keyboard and mouse system according to any one of claims 1-7,
the multi-channel signals of the MEMS module and the EMG module are processed in the comprehensive processing module as follows:
(1) synchronously acquire the MEMS and EMG signals;
(2) filter the MEMS and EMG signals;
(3) feed the filtered MEMS and EMG signals into a temporal convolutional neural network trained on a keyboard-and-mouse control gesture sample database, and output a gesture classification result and a recognition score;
(4) judge whether the recognition score exceeds a threshold: if yes, proceed to step (5); if no, return to step (1);
(5) output the gesture recognition instruction and store the MEMS and EMG data;
(6) label the stored MEMS and EMG data with the user's keyboard-input and mouse-button control behaviour to generate a user-specific sample set;
(7) import the user-specific sample set into the keyboard-and-mouse control gesture sample database and retrain the temporal convolutional neural network.
9. The gesture interaction based augmented reality keyboard and mouse system according to any one of claims 1-7,
the working process of the comprehensive processing module is as follows:
(1) synchronously acquire acceleration and electromyographic signals from the MEMS module and the EMG module, and perform gesture recognition;
(2) detect whether the gesture contains a synchronous lifting or pressing action of both hands, which determines whether to enter the system control state: if yes, proceed to step (3); if no, return to step (1);
(3) apply an initial tactile feedback to the fingertips of both hands through the electrotactile stimulation module, and detect whether a one-hand half-fist gesture is present, which determines whether to enter the two-hand keyboard input state: if yes, proceed to step (4); if no, proceed to step (9);
(4) identify the moving finger and its direction of motion, and output the recognition category and score;
(5) judge whether a key is pressed: if yes, proceed to step (6); if no, return to step (4);
(6) apply keyboard-tap tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module, and display the output text in the visual enhancement module;
(7) judge from the gesture recognition result whether the thumb is pressed: if yes, proceed to step (8); if no, recommend candidate input content in order of probability based on the input method's code table, and return to step (4);
(8) confirm and display the output text in the visual enhancement module, and judge whether input is finished: if yes, end the current operation; if no, return to step (4);
(9) identify the keyboard hand and the mouse hand from the gesture detection result, switch to the keyboard-and-mouse state, and take over the keyboard and mouse functions;
(10) judge whether the current hand is the keyboard hand: if yes, further judge whether a key is pressed; if a key is pressed, issue a control instruction, synchronously apply keyboard-tap tactile and auditory feedback through the electrotactile stimulation module and the auditory feedback module, and proceed to step (13); if the current hand is not the keyboard hand, proceed to step (11);
(11) judge whether the virtual beam is selected: if yes, compute the pointing direction of the line connecting the two light spots from the depth image acquisition module and display the real-time virtual laser beam direction in the visual enhancement module; if no, compute the three-dimensional displacement from the MEMS signals and display the real-time three-dimensional coordinate position in the visual enhancement module;
(12) judge whether a mouse button is pressed: if yes, issue a control instruction, apply mouse-button tactile and auditory feedback, and proceed to step (13);
(13) output the keyboard and mouse cooperative control command.
CN201910524900.9A 2019-06-18 2019-06-18 Augmented reality keyboard and mouse system based on gesture interaction Active CN110442233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910524900.9A CN110442233B (en) 2019-06-18 2019-06-18 Augmented reality keyboard and mouse system based on gesture interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910524900.9A CN110442233B (en) 2019-06-18 2019-06-18 Augmented reality keyboard and mouse system based on gesture interaction

Publications (2)

Publication Number Publication Date
CN110442233A CN110442233A (en) 2019-11-12
CN110442233B true CN110442233B (en) 2020-12-04

Family

ID=68429126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910524900.9A Active CN110442233B (en) 2019-06-18 2019-06-18 Augmented reality keyboard and mouse system based on gesture interaction

Country Status (1)

Country Link
CN (1) CN110442233B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111158476B (en) * 2019-12-25 2023-05-23 中国人民解放军军事科学院国防科技创新研究院 Key recognition method, system, equipment and storage medium of virtual keyboard
CN113220117B (en) * 2021-04-16 2023-12-29 邬宗秀 Device for human-computer interaction
CN113419622A (en) * 2021-05-25 2021-09-21 西北工业大学 Submarine operation instruction control system interaction method and device based on gesture operation
CN113918013B (en) * 2021-09-28 2024-04-16 天津大学 Gesture directional interaction system and method based on AR glasses
CN114265498B (en) * 2021-12-16 2023-10-27 中国电子科技集团公司第二十八研究所 Method for combining multi-mode gesture recognition and visual feedback mechanism

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294226A (en) * 2013-05-31 2013-09-11 东南大学 Virtual input device and virtual input method
CN103823551A (en) * 2013-03-17 2014-05-28 浙江大学 System and method for realizing multidimensional perception of virtual interaction
CN108460313A (en) * 2017-02-17 2018-08-28 鸿富锦精密工业(深圳)有限公司 A kind of gesture identifying device and human-computer interaction system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140198130A1 (en) * 2013-01-15 2014-07-17 Immersion Corporation Augmented reality user interface with haptic feedback
CN107203272A (en) * 2017-06-23 2017-09-26 山东万腾电子科技有限公司 Wearable augmented reality task instruction system and method based on myoelectricity cognition technology
CN107943282A (en) * 2017-11-06 2018-04-20 上海念通智能科技有限公司 A kind of man-machine interactive system and method based on augmented reality and wearable device
CN108829245B (en) * 2018-05-30 2019-08-23 中国人民解放军军事科学院国防科技创新研究院 A kind of virtual sand table intersection control routine based on multi-modal brain-machine interaction technology
CN109453509A (en) * 2018-11-07 2019-03-12 龚映清 It is a kind of based on myoelectricity and motion-captured virtual upper limb control system and its method


Also Published As

Publication number Publication date
CN110442233A (en) 2019-11-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant