CN110069133A - Demo system control method and control system based on gesture identification - Google Patents


Info

Publication number
CN110069133A
CN110069133A CN201910249048.9A
Authority
CN
China
Prior art keywords
palm
gesture
user
axis
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910249048.9A
Other languages
Chinese (zh)
Inventor
刘嵩
邱达
向长城
陈世强
刘宏宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Yizhixun Information Technology Co Ltd
Hubei University for Nationalities
Original Assignee
Hubei Yizhixun Information Technology Co Ltd
Hubei University for Nationalities
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Yizhixun Information Technology Co Ltd, Hubei University for Nationalities filed Critical Hubei Yizhixun Information Technology Co Ltd
Priority to CN201910249048.9A priority Critical patent/CN110069133A/en
Publication of CN110069133A publication Critical patent/CN110069133A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a presentation system control method and system based on gesture recognition. The method comprises the following steps: capture a video of the user's gestures; process the video, establishing a Cartesian spatial coordinate system with the intersection of the user's spine and shoulder line as the origin, the shoulder line as the Y axis, the spine as the Z axis, and the user's direction of movement as the X axis; extract the joint feature points of the user's palm in each frame of the video; extract the joint feature points of the arm in each frame and connect them into line segments, and determine the position coordinates of the palm in each frame within the established coordinate system; track the motion of the palm, describe its trajectory in the coordinate system, and establish a motion vector; and control page turning of the presentation system according to the palm's motion vector. By taking the user's own body as the frame of reference, the method overcomes the limited flexibility and accuracy of prior-art approaches that rely on a fixed coordinate system established from the camera.

Description

Demo system control method and control system based on gesture identification
Technical field
The present invention relates to the field of computing, and in particular to a presentation system control method and control system based on gesture recognition.
Background technique
With the spread of computers and projectors, presentations are frequently given with a projector in offices, classrooms, and similar settings. Existing tools for operating a projector in sync with a computer have clear limitations. First, they are confined to a single linear sequence and can only page forward and backward. Second, operating away from the computer requires a laser pointer, which is inconvenient: it imposes requirements on the operating distance, is awkward to carry and easy to lose, and its laser poses a certain hazard to the human body, so it is bound never to see lasting, widespread use. In teaching, traditional peripherals such as the mouse and keyboard confine the instructor's range of movement, so the instructor cannot teach from an arbitrary position in the classroom. Some teaching systems now use somatosensory (motion-sensing) devices to recognize gestures and turn PPT pages on the projector, but such devices are easily disturbed by the environment, so interference keeps them from achieving the desired effect; moreover, if the instructor wants to control the slides from every position in the classroom, somatosensory devices limited to a fixed region cannot accomplish this.
Summary of the invention
To overcome the above defects in the prior art, the object of the present invention is to provide a presentation system control method and control system based on gesture recognition.
To achieve this object, the present invention provides a gesture-recognition-based presentation system control method comprising the following steps:
S1: capture a video of the user's gestures;
S2: process the video, as follows:
S2-1, establish a coordinate system: establish a Cartesian spatial coordinate system with the intersection of the user's spine and shoulder line as the origin, the user's shoulder line as the Y axis, the spine as the Z axis, and the user's direction of movement as the X axis;
S2-2, extract the joint feature points of the user's palm in each frame of the video;
S2-3, extract the joint feature points of the arm in each frame and connect them into line segments, and determine the position coordinates of the palm in each frame within the established Cartesian coordinate system;
specifically: first, compute the angles between the arm and the X, Y, and Z axes; second, compute the distances from the palm to the X, Y, and Z axes; third, obtain the position coordinates of the palm from these angles and distances, and save them;
S2-4, track the motion of the palm, describe its trajectory in the Cartesian coordinate system, and establish a motion vector;
S3: control page turning of the presentation system according to the palm's motion vector.
Because this method establishes the coordinate system on the user's own body, using the body itself as the frame of reference, it overcomes the limited flexibility and accuracy of the fixed, camera-based coordinate systems of the prior art.
Further, before the coordinate system is established, the captured gesture video is thinned by extracting every other frame. This reduces the computational load and improves efficiency.
Further, step S2 also includes step S2-5: identifying the open-palm and clenched-fist states of the palm from the coordinates of the joint feature points of the user's palm in each frame of the video;
in the open-palm state, page-turn mode is maintained, i.e. step S3 is executed;
in the clenched-fist state, step S4 is executed: cursor-control mode is entered, and the cursor follows the palm's motion trajectory. Further, when both palms are clenched, target-selection mode is entered: one hand stays still while the other moves, and target content is selected according to the moving hand's trajectory; when either palm is open, page-turn mode is maintained, and page turning is controlled according to the open palm's motion vector. This overcomes the traditional limitation of paging only forward and backward, adding cursor control and target selection.
Further, when the palm's motion vector points from the negative toward the positive direction of the Y axis, the control unit pages up; when it points from the positive toward the negative direction of the Y axis, the control unit pages down.
Preferably, before the coordinate system is established, faces in the video are recognized to find the correct user, and the coordinate system is then established for that user.
The invention also provides a gesture-recognition-based presentation control system comprising a control unit, a gesture capture module, a gesture recognition module, and a display unit. The gesture capture module captures video of the user's gestures, and its output is connected to the gesture recognition module; the gesture recognition module is connected to the control unit, and the control unit is connected to the display unit;
the gesture recognition module comprises an image extraction module, a motion tracking module, an image processing module, and a gesture analysis module;
the image extraction module extracts the joint feature points of the user's palm from each frame of the video, and also extracts the joint feature points of the arm from each frame;
the motion tracking module tracks the motion of the palm;
the image processing module thins the captured gesture video by extracting every other frame;
the gesture analysis module recognizes and analyzes gestures by the gesture-recognition-based presentation system control method described above;
the control unit directs the display unit to display results according to the gesture analysis.
Further, the gesture recognition module also includes a face recognition unit.
The beneficial effects of the present invention: the method and system can be widely applied in settings such as teaching and meetings, with the advantages of low cost and convenient deployment; the presenter can control the slides from any position in the room, with little influence from the room's complex environment.
Additional aspects and advantages of the invention will be set forth in part in the description that follows, and in part will become obvious from the description or may be learned by practice of the invention.
Detailed description of the invention
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the drawings, in which:
Fig. 1 is a flowchart of the invention;
Fig. 2 is a structural block diagram of the invention.
Specific embodiment
Embodiments of the invention are described in detail below, with examples shown in the accompanying drawings, where the same or similar reference numbers throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they serve only to explain the invention and are not to be construed as limiting it.
In the description of the present invention, unless otherwise specified and limited, it should be noted that the terms "mounted", "joined", and "connected" are to be understood broadly: a connection may, for example, be mechanical or electrical, may be internal to two elements, may be direct, or may be indirect through an intermediary. Those of ordinary skill in the art can understand the specific meaning of these terms according to the specific circumstances.
As shown in Fig. 1, the present invention provides a gesture-recognition-based presentation system control method comprising the following steps:
S1: capture a video of the user's gestures.
S2: process the video, as follows:
S2-1, establish a coordinate system: establish a Cartesian spatial coordinate system with the intersection of the user's spine and shoulder line as the origin, the user's shoulder line as the Y axis, the spine as the Z axis, and the user's direction of movement as the X axis;
For example: take the user's horizontal right as the negative half of the Y axis and the left as the positive half; forward as the positive half of the X axis and backward as the negative half; straight up as the positive half of the Z axis and straight down as the negative half.
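As a concrete sketch of this body-centered frame (the joint names, the NumPy representation, and the use of a spine-base joint to fix the upward direction are illustrative assumptions, not details from the patent), the three axes can be built from camera-space skeleton joints and used to re-express any point in the user's own frame:

```python
import numpy as np

def body_frame(spine_base, spine_shoulder, left_shoulder, right_shoulder):
    """Build a body-centered Cartesian frame from camera-space skeleton joints.

    Origin: the intersection of the spine and the shoulder line.
    Y axis: along the shoulders (right -> left positive, per the example above).
    Z axis: along the spine, pointing up.
    X axis: completes a right-handed frame (the user's forward direction).
    """
    origin = np.asarray(spine_shoulder, dtype=float)
    y = np.asarray(left_shoulder, float) - np.asarray(right_shoulder, float)
    y /= np.linalg.norm(y)
    z = origin - np.asarray(spine_base, float)   # spine direction, upward
    z -= y * np.dot(z, y)                        # make orthogonal to Y
    z /= np.linalg.norm(z)
    x = np.cross(y, z)                           # forward direction
    return origin, np.stack([x, y, z])           # rows are the axis vectors

def to_body_coords(point, origin, axes):
    """Express a camera-space point in the body-centered frame."""
    return axes @ (np.asarray(point, float) - origin)
```

Because the frame moves with the user, the same gesture yields the same coordinates wherever the user stands relative to the camera.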
S2-2: extract the joint feature points of the user's palm in each frame of the video.
S2-3: extract the joint feature points of the arm in each frame and connect them into line segments, then determine the position coordinates of the palm in each frame within the established Cartesian coordinate system. When locating the palm, the coordinates of the palm center are taken as the palm's position coordinates.
Specifically: first, compute the angles between the arm and the X, Y, and Z axes; second, compute the distances from the palm to the X, Y, and Z axes; third, obtain the position coordinates of the palm from these angles and distances, and save them.
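One way to read this angle-and-distance step, sketched under the assumption that "the arm" is treated as the shoulder-to-palm vector (the patent does not fix the exact formula): the cosines of the arm's angles with the axes, scaled by the shoulder-to-palm distance, reproduce the palm's offset from the shoulder.

```python
import numpy as np

def palm_position(shoulder, palm):
    """Compute the arm's angles with the X, Y, Z axes and recover the palm
    coordinates from those angles plus the shoulder-to-palm distance."""
    shoulder = np.asarray(shoulder, float)
    v = np.asarray(palm, float) - shoulder       # arm vector
    r = np.linalg.norm(v)                        # shoulder-to-palm distance
    cosines = v / r                              # direction cosines
    angles = np.degrees(np.arccos(cosines))      # angles with X, Y, Z
    coords = shoulder + r * cosines              # reconstructed palm position
    return angles, coords
```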
S2-4: track the motion of the palm, describe its trajectory in the Cartesian coordinate system, and establish a motion vector.
A preferred, but not limiting, method is: track the motion of the palm joint feature points and the arm joint feature points, describe both trajectories in the Cartesian coordinate system, establish a motion vector for each, then fuse the two motion vectors to obtain the palm's motion vector, and save its motion feature matrix.
Alternatively, an automatic video tracking module can be used directly to track the palm.
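The vector-fusion step above can be sketched as follows; the 70/30 weighting and the use of net (first-to-last frame) displacements are assumed choices, since the patent does not specify the fusion rule:

```python
import numpy as np

def fused_motion_vector(palm_track, arm_track, w_palm=0.7):
    """Fuse the palm-joint and arm-joint trajectories into one motion vector.

    Each track is a sequence of per-frame 3D positions in the body frame;
    the net displacement of each track is combined by a weighted sum.
    """
    palm_track = np.asarray(palm_track, float)
    arm_track = np.asarray(arm_track, float)
    v_palm = palm_track[-1] - palm_track[0]   # net palm displacement
    v_arm = arm_track[-1] - arm_track[0]      # net arm displacement
    return w_palm * v_palm + (1 - w_palm) * v_arm
```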
S2-5: the coordinates of the joint feature points of the palm in each frame of the captured video are trained under a convolutional neural network; the hand shape is judged from the relative relationships of the joint feature points on the palm, and its feature matrix is saved, so as to distinguish the different hand shapes of different users. Alternatively, the distances between the joint feature points, especially between the fingers, can be computed from their coordinates and compared against a set threshold: when the distances between the joint feature points are smaller than the threshold, the hand is judged to be a clenched fist; when they are larger than the threshold, it is judged to be an open palm.
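The distance-threshold variant can be sketched in a few lines; the 0.05 (metre-scale) threshold and the use of the maximum pairwise distance are illustrative assumptions, as the patent only says that a threshold is set:

```python
from itertools import combinations
import numpy as np

def hand_state(joints, threshold=0.05):
    """Judge fist vs. open palm from the pairwise distances between the
    palm's joint feature points: all distances below the threshold reads
    as a clenched fist, otherwise as an open palm."""
    pts = np.asarray(joints, float)
    dists = [np.linalg.norm(a - b) for a, b in combinations(pts, 2)]
    return "fist" if max(dists) < threshold else "open"
```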
In the open-palm state, page-turn mode is maintained, i.e. step S3 is executed, and page turning of the presentation system is controlled according to the palm's motion vector.
When the palm's motion vector points from the negative toward the positive direction of the Y axis, the control unit pages up; when it points from the positive toward the negative direction of the Y axis, the control unit pages down.
In the clenched-fist state, step S4 is executed: cursor-control mode is entered, and the cursor follows the palm's motion trajectory.
Specifically, when both palms are clenched, target-selection mode is entered: one hand stays still while the other moves, and target content is selected according to the moving hand's trajectory. If both hands move, or only one hand is detected and it is clenched, the action is invalid. When either palm is open, page-turn mode is maintained, and page turning is controlled by the open palm's motion vector. If both palms are open, the directions of their motion vectors are compared: if they agree, a page turn is executed in that direction; if not, the page is zoomed in or out.
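The mode-selection rules above can be condensed into a small dispatcher. This is a sketch: the state encoding ('open', 'fist', or None for an undetected hand) and the action names are illustrative, direction agreement is tested on the Y components of the motion vectors, and the single-fist cursor mode is folded into the embodiment's "lone fist is invalid" rule:

```python
def dispatch(left, right, vy_left=0.0, vy_right=0.0):
    """Map two hand states and the Y components of their motion vectors
    to a presentation action."""
    if left == "fist" and right == "fist":
        return "select"                                # two fists: selection
    if left == "open" and right == "open":
        if (vy_left > 0) == (vy_right > 0):            # directions agree
            return "page_up" if vy_left > 0 else "page_down"
        return "zoom"                                  # disagreement: zoom
    if "open" in (left, right):                        # one open palm pages
        vy = vy_left if left == "open" else vy_right
        return "page_up" if vy > 0 else "page_down"
    return "none"    # a lone fist, or nothing detected, is invalid
```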
Gestures below the user's waist are treated as invalid, which avoids interference from an arm hanging naturally at the side.
Before the coordinate system is established, the captured gesture video is thinned by extracting every other frame: for example, the 30 frames per second captured by the camera are reduced to a 15-frame stream, cutting the computer's workload and improving efficiency.
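The every-other-frame pre-processing amounts to simple stride-based decimation, sketched here for any frame sequence:

```python
def decimate(frames, keep_every=2):
    """Thin a frame sequence by keeping every `keep_every`-th frame,
    e.g. turning a 30 fps capture into a 15 fps stream."""
    return frames[::keep_every]
```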
Before the coordinate system is established, faces in the video are recognized to find the correct user, and the coordinate system is then established for that user. This prevents other people's gestures from interfering with the recognition result.
All positional relationships in this embodiment are expressed with respect to the established coordinate system.
The invention also provides a gesture-recognition-based presentation control system comprising a control unit, a gesture capture module, a gesture recognition module, and a display unit. The gesture capture module captures video of the user's gestures, and its output is connected to the gesture recognition module; the gesture recognition module is connected to the control unit, and the control unit is connected to the display unit.
The gesture recognition module comprises an image extraction module, a motion tracking module, an image processing module, and a gesture analysis module.
The image extraction module extracts the joint feature points of the user's palm from each frame of the video, and also extracts the joint feature points of the arm from each frame.
The motion tracking module tracks the motion of the palm.
The image processing module thins the captured gesture video by extracting every other frame.
The gesture analysis module recognizes and analyzes gestures by the gesture-recognition-based presentation system control method described above.
The control unit directs the display unit to display results according to the gesture analysis.
To prevent other people's gestures from interfering with the recognition result, the gesture recognition module also includes a face recognition unit, and gesture recognition is performed only on the person who has passed face recognition.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic use of these terms does not necessarily refer to the same embodiment or example; moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those skilled in the art will understand that various changes, modifications, substitutions, and variations may be made to these embodiments without departing from the principle and purpose of the invention; the scope of the invention is defined by the claims and their equivalents.

Claims (8)

1. A presentation system control method based on gesture recognition, characterized by comprising the following steps:
S1: capture a video of the user's gestures;
S2: process the video, as follows:
S2-1, establish a coordinate system: establish a Cartesian spatial coordinate system with the intersection of the user's spine and shoulder line as the origin, the user's shoulder line as the Y axis, the spine as the Z axis, and the user's direction of movement as the X axis;
S2-2, extract the joint feature points of the user's palm in each frame of the video;
S2-3, extract the joint feature points of the arm in each frame and connect them into line segments, and determine the position coordinates of the palm in each frame within the established Cartesian coordinate system;
specifically: first, compute the angles between the arm and the X, Y, and Z axes; second, compute the distances from the palm to the X, Y, and Z axes; third, obtain the position coordinates of the palm from these angles and distances, and save them;
S2-4, track the motion of the palm, describe its trajectory in the Cartesian coordinate system, and establish a motion vector;
S3: control page turning of the presentation system according to the palm's motion vector.
2. The control method according to claim 1, characterized in that, before the coordinate system is established, the captured gesture video is thinned by extracting every other frame.
3. The control method according to claim 1, characterized in that step S2 further includes step S2-5: identifying the open-palm and clenched-fist states of the palm from the coordinates of the joint feature points of the user's palm in each frame of the video;
in the open-palm state, page-turn mode is maintained, i.e. step S3 is executed;
in the clenched-fist state, step S4 is executed: cursor-control mode is entered, and the cursor follows the palm's motion trajectory.
4. The control method according to claim 3, characterized in that, when both palms are clenched, target-selection mode is entered: one hand stays still while the other moves, and target content is selected according to the moving hand's trajectory; when either palm is open, page-turn mode is maintained, and page turning is controlled according to the open palm's motion vector.
5. The control method according to claim 1, characterized in that, when the palm's motion vector points from the negative toward the positive direction of the Y axis, the control unit pages up; when it points from the positive toward the negative direction of the Y axis, the control unit pages down.
6. The control method according to claim 1, characterized in that, before the coordinate system is established, faces in the video are recognized to find the correct user, and the coordinate system is then established for that user.
7. A presentation control system based on gesture recognition, including a control unit, characterized by further comprising a gesture capture module, a gesture recognition module, and a display unit; the gesture capture module captures video of the user's gestures, and its output is connected to the gesture recognition module; the gesture recognition module is connected to the control unit, and the control unit is connected to the display unit;
the gesture recognition module comprises an image extraction module, a motion tracking module, an image processing module, and a gesture analysis module;
the image extraction module extracts the joint feature points of the user's palm from each frame of the video, and also extracts the joint feature points of the arm from each frame;
the motion tracking module tracks the motion of the palm;
the image processing module thins the captured gesture video by extracting every other frame;
the gesture analysis module recognizes and analyzes gestures by the gesture-recognition-based presentation system control method according to any one of claims 1-6;
the control unit directs the display unit to display results according to the gesture analysis.
8. The gesture-recognition-based presentation control system according to claim 7, characterized in that the gesture recognition module further includes a face recognition unit.
CN201910249048.9A 2019-03-29 2019-03-29 Demo system control method and control system based on gesture identification Pending CN110069133A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910249048.9A CN110069133A (en) 2019-03-29 2019-03-29 Demo system control method and control system based on gesture identification


Publications (1)

Publication Number Publication Date
CN110069133A true CN110069133A (en) 2019-07-30

Family

ID=67366752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910249048.9A Pending CN110069133A (en) 2019-03-29 2019-03-29 Demo system control method and control system based on gesture identification

Country Status (1)

Country Link
CN (1) CN110069133A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955275A (en) * 2014-04-21 2014-07-30 小米科技有限责任公司 Application control method and device
US20150363044A1 (en) * 2014-06-13 2015-12-17 Shui Ho Hung Gesture identification system in tablet projector and gesture identification method thereof
CN104808788A (en) * 2015-03-18 2015-07-29 北京工业大学 Method for controlling user interfaces through non-contact gestures
CN106125928A (en) * 2016-06-24 2016-11-16 同济大学 PPT based on Kinect demonstrates aid system

Non-Patent Citations (1)

Title
朱大勇 (Zhu Dayong): "Research on Key Technologies of Kinect-Based Human-Computer Interaction and Its Large-Screen Application", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (8)

Publication number Priority date Publication date Assignee Title
CN111191511A (en) * 2019-12-03 2020-05-22 北京联合大学 Method and system for identifying dynamic real-time behaviors of prisons
CN111191511B (en) * 2019-12-03 2023-08-18 北京联合大学 Dynamic real-time behavior recognition method and system for prison
CN111589098A (en) * 2020-04-21 2020-08-28 哈尔滨拓博科技有限公司 Follow-up gesture control method and system for doll with stepping crown block
CN112509668A (en) * 2020-12-16 2021-03-16 成都翡铭科技有限公司 Method for identifying whether hand is gripping or not
CN113299132A (en) * 2021-06-08 2021-08-24 上海松鼠课堂人工智能科技有限公司 Student speech skill training method and system based on virtual reality scene
CN114327047A (en) * 2021-12-01 2022-04-12 北京小米移动软件有限公司 Device control method, device control apparatus, and storage medium
CN114327047B (en) * 2021-12-01 2024-04-30 北京小米移动软件有限公司 Device control method, device control apparatus, and storage medium
CN114637439A (en) * 2022-03-24 2022-06-17 海信视像科技股份有限公司 Display device and gesture track recognition method

Similar Documents

Publication Publication Date Title
CN110069133A (en) Demo system control method and control system based on gesture identification
Xu et al. Robot teaching by teleoperation based on visual interaction and extreme learning machine
Zhang et al. Visual panel: virtual mouse, keyboard and 3D controller with an ordinary piece of paper
CN104808788A (en) Method for controlling user interfaces through non-contact gestures
CN102830798A (en) Mark-free hand tracking method of single-arm robot based on Kinect
Jo et al. Chili: viewpoint control and on-video drawing for mobile video calls
Chao et al. Robotic free writing of Chinese characters via human–robot interactions
Yang et al. A study of the human-robot synchronous control system based on skeletal tracking technology
CN105677018A (en) Screen capture method and apparatus, and electronic device
Chen et al. Drawing elon musk: A robot avatar for remote manipulation
Liang et al. Robot teleoperation system based on mixed reality
Shao et al. A natural interaction method of multi-sensory channels for virtual assembly system of power transformer control cabinet
Lin et al. The implementation of augmented reality in a robotic teleoperation system
O'Hagan et al. Visual gesture interfaces for virtual environments
Park et al. LED-glove based interactions in multi-modal displays for teleconferencing
CN118098033A (en) Teaching system and method based on mixed reality technology
CN107193384B (en) Switching method of mouse and keyboard simulation behaviors based on Kinect color image
Kim et al. Vision based laser pointer interaction for flexible screens
Wu et al. Kinect-based robotic manipulation: From human hand to end-effector
CN107831952A (en) A kind of Fei Jubian projects Integral electronic touch-control blank
Chaudhary Finger-stylus for non touch-enable systems
Sato et al. Video-based tracking of user's motion for augmented desk interface
CN103699214A (en) Three-dimensional tracking and interacting method based on three-dimensional natural gestures
Yang et al. Research on a visual sensing and tracking system for distance education
Zhao et al. Intuitive robot teaching by hand guided demonstration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190730
