CN101609618B - Real-time hand language communication system based on special codes - Google Patents
Real-time hand language communication system based on special codes Download PDFInfo
- Publication number
- CN101609618B CN101609618B CN2008101635548A CN200810163554A CN101609618B CN 101609618 B CN101609618 B CN 101609618B CN 2008101635548 A CN2008101635548 A CN 2008101635548A CN 200810163554 A CN200810163554 A CN 200810163554A CN 101609618 B CN101609618 B CN 101609618B
- Authority
- CN
- China
- Prior art keywords
- sign language
- space
- information
- coding
- hand
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention relates to a real-time sign language communication system based on special codes, comprising a pair of data gloves for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hands are placed, and an intelligent recognition device that recognizes sign language from the data gloves and the position tracker. The recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database that establishes corresponding sequences of text information, sign language codes, and action animations. The real-time sign language communication system based on special codes has the advantages of good rapidity, strong real-time performance, and low cost.
Description
Technical field
The present invention relates to a system for real-time communication between deaf-mute and hearing people, and in particular to a sign language communication system.
Background technology
Sign language is the language used by deaf-mute people. It is a relatively stable expression system constituted by hand-shape actions supplemented by facial expressions and postures, a special language that communicates through action and vision. China has more than 20 million people who are hard of hearing, and they communicate mainly in sign language. Because sign language is not commonly used by most of society, their exchanges with society are restricted to a great extent. The development of sign language communication systems can address this problem to some degree, plays a significant role in creating a barrier-free environment for deaf people, and greatly helps to promote standard Chinese Sign Language.
As society's care for deaf-mute people has grown, more and more scholars and experts have begun to study sign language recognition systems in order to better realize communication between hearing and deaf-mute people. Current sign language recognition systems fall mainly into two categories: those based on data gloves and those based on vision (images). Because sign language has spatio-temporal concurrency, its recognition should use a data glove as the hand-shape input device and a position tracker to capture the motion of the palm. Compared with a camera, the data collected by a data glove and a position tracker are concise and accurate, and these two collection devices readily yield features that reflect the spatio-temporal characteristics of sign language, such as finger-joint motion information and palm motion information; moreover, data-glove readings are unaffected by environmental changes such as illumination. Many experts worldwide have devoted themselves to research on sign language recognition methods and have realized the conversion of sign language signals into text and acoustic information; some experts have also used electronic equipment to convert text messages into sign language animations, thereby realizing one-way sign language communication between a person and a machine terminal.
However, the communication difficulties of the vast majority of deaf-mute people arise in the process of exchanging with hearing people. To realize real-time communication, the input and output devices must respond rapidly, and efficient recognition and conversion methods are also required, so that sign language and text information can be converted into each other in an instant. For further popularization, the cost of the equipment used must not be too high.
Summary of the invention
To overcome the poor rapidity, poor real-time performance, and high cost of existing sign language recognition systems, the present invention provides a real-time sign language communication system based on spatial coding that has good rapidity, strong real-time performance, and low cost.
The technical solution adopted by the present invention to solve the technical problem is:
A real-time sign language communication system based on spatial coding, comprising a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hands are placed, and an intelligent recognition device for performing sign language recognition from the data glove and the position tracker. The intelligent recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database that establishes corresponding sequences of text information, sign language codes, and action animations. The module for converting sign language motion information into text information comprises:
a signal data collection unit, for obtaining the data frames of each time period according to the baud rates of the data glove and the position tracker, yielding a series of vector data as input;
a data preprocessing unit, for defining the bend-value ranges according to the glove wearer's signing habits, and for dividing the spatial regions according to the wearer's personal features using the position tracker, locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder, and the right shoulder;
a sign language feature extraction unit, for extracting gesture information from the data-glove input and extracting the direction and position of the hand from the position-tracker input, forming the feature vector of an input sample;
a hand-shape encoding unit, for encoding the sign language signal according to the obtained feature vector to produce a character string, with the following coding rules:
(4.1) each finger joint is divided into three states (extended, half bent, and fully bent), and the bending state of the knuckles is encoded in binary;
(4.2) the spatial position of the palm is encoded in binary;
an output matching unit, for querying the sign language dictionary database with the value of the character string; the query result yields the text information.
The module for converting text information into sign language information comprises:
an avatar demonstration-space definition unit, for setting the avatar's demonstration space according to the avatar's skeletal parameters and defining it with the space-division rules;
a positioning determination unit, for recording the matrix of the moving palm at the center of each region of the demonstration space, together with the relevant bone positions;
an input matching unit, for retrieving the input information as a keyword in the sign language dictionary database, and extracting the code of the sign language action if relevant information is retrieved;
a key-frame positioning unit, for setting the avatar's bones at the located positions of each region involved in the sign's meaning according to the code of the sign language action, obtaining the avatar's action key frames;
a virtual demonstration unit, for automatically generating interpolated frames from the key frames, obtaining a sign language animation demonstrated by the avatar and displaying it on the screen terminal.
As a preferred scheme: in the hand-shape encoding unit, the three finger states are encoded as extended 00, half bent 01, and fully bent 10; each finger is encoded according to this bending-state scheme, so a single ten-character string represents the state of all five fingers.
As another preferred scheme: in the hand-shape encoding unit, the binary coding of the spatial position of the palm proceeds as follows. The space above the mouth is divided into three spaces: the space left of the left ear is "1001", the space between the left ear and the right ear is "1000", and the space right of the right ear is "1010". The space below the mouth and above the shoulders is divided into three spaces: the space left of the left ear is "0001", the space between the left ear and the right ear is "0000", and the space right of the right ear is "0010". The space below the shoulders is also divided into three spaces: the space left of the left shoulder is "0101", the space between the left shoulder and the right shoulder is "0100", and the space right of the right shoulder is "0110". The palm direction follows the XYZ coordinates of the position tracker corresponding to the palm vector: palm up "0010", palm down "0011", palm to the left "0101", palm to the right "0100", palm forward "1001", and palm toward the body "1000".
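The nine palm-position codes and six palm-direction codes above can be written out as lookup tables. The code values are the patent's; the band names and the 4 + 4 + 10 bit layout sketched at the end are illustrative assumptions.

```python
# Palm-position codes from the scheme above, indexed by vertical band
# (above mouth / mouth-to-shoulder / below shoulder) and horizontal band.
POSITION_CODE = {
    ("above_mouth", "left"):      "1001",
    ("above_mouth", "middle"):    "1000",
    ("above_mouth", "right"):     "1010",
    ("mid", "left"):              "0001",
    ("mid", "middle"):            "0000",
    ("mid", "right"):             "0010",
    ("below_shoulder", "left"):   "0101",
    ("below_shoulder", "middle"): "0100",
    ("below_shoulder", "right"):  "0110",
}

# Palm-direction codes from the scheme above.
DIRECTION_CODE = {
    "up": "0010", "down": "0011", "left": "0101",
    "right": "0100", "forward": "1001", "toward_body": "1000",
}

# One plausible per-hand frame code: direction + position + hand shape,
# 4 + 4 + 10 = 18 bits (the concatenation order is an assumption).
frame_code = DIRECTION_CODE["forward"] + POSITION_CODE[("mid", "middle")]
```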
The technical concept of the present invention is: based on statistical regularities of common sign language actions, a division of space with coded hand shapes and gestures is constructed, and a method dedicated to real-time conversion between sign language and text is proposed. By combining the codes of hand shape and gesture with the sign language dictionary database, a fast encoding method is given for sign language recognition, and a fast decoding method is given for sign language synthesis. The method has high coding efficiency and fast decoding speed.
The present invention proposes an effective sign language dictionary database method for storing sign language codes and promoting sign language synthesis applications. The previous approach of importing the original animation data of each action into a sign language motion database one by one requires large storage, runs inefficiently, and produces oversized sign language synthesis files, making it unsuitable for real-time sign language translation. The proposed database method is considerably more efficient.
The characteristics of the spatial coding of the present invention are: (1) In common sign language, the spatial positions of the hands concentrate mainly around the head and are distributed only occasionally around the upper body, so the space around the head is the space with the highest frequency of signing motion. In the vertical direction, with the mouth and the shoulders as separators, space is divided into 3 horizontal bands; in the horizontal direction, with the left ear, right ear, left shoulder, and right shoulder as separators, space is divided into 6 longitudinal bands. This division partitions the space around the head finely and the space around the body only coarsely, which improves the recognition efficiency of the space. (2) The signing space is predefined according to the individual size of the equipment wearer, making the space division adaptive. (3) Coding efficiency is high. Based on the space division, binary coding is used throughout the encoding of hand shape and gesture, with no long code words or special code words, so the sign language dictionary database occupies less storage and can be read and written more efficiently. With this method, the encoded sign language motion information of a sentence can be kept within 0.75 KB, depending on sentence length. (4) Decoding speed is fast. Because the coded sequences map directly onto the avatar's bone matrices, editing of pictures or animations is avoided during decoding, so sign language synthesis is easy to realize. (5) The method is simple and convenient to implement. The whole algorithm uses only binary matching operations and avoids complex calculations; it is a simple, efficient coding method that can easily be ported across development platforms of different versions.
The beneficial effects of the present invention are mainly: (1) by means of the division and coding of the signing space, real-time sign language communication is realized, reducing meaningless waiting and response time; (2) a data glove with only finger-bend sensors suffices to extract hand-shape features, reducing the cost of supporting equipment; (3) the coding is efficient, and the wearer can extend the sign language dictionary database by signing new entries, ensuring the completeness of the sign language vocabulary; (4) the fast sign language synthesis method guarantees high synthesis speed; (5) the real-time sign language communication system also serves as a training system, providing sign language teaching and training for users who do not know sign language.
Description of drawings
Fig. 1 is a schematic diagram of the framework of the real-time sign language communication system.
Fig. 2 is a schematic diagram of the hand-shape coding key.
Fig. 3 is a schematic diagram of the signing-space division.
Fig. 4 is a schematic diagram of the palm-direction reference coordinates.
Fig. 5 is a chart of the Chinese manual alphabet.
Fig. 6 is a schematic diagram of the "hello" sign.
Fig. 7 is a schematic diagram of the "Nice to see you" sign.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
Referring to Figs. 1-7, a real-time sign language communication system based on spatial coding comprises a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hands are placed, and an intelligent recognition device for performing sign language recognition from the data glove and the position tracker. The intelligent recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database that establishes corresponding sequences of text information, sign language codes, and action animations. The module for converting sign language motion information into text information comprises: a signal data collection unit, for obtaining the data frames of each time period according to the baud rates of the data glove and the position tracker, yielding a series of vector data as input; a data preprocessing unit, for defining the bend-value ranges according to the glove wearer's signing habits, and for dividing the spatial regions according to the wearer's personal features using the position tracker, locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder, and the right shoulder; a sign language feature extraction unit, for extracting gesture information from the data-glove input and extracting the direction and position of the hand from the position-tracker input, forming the feature vector of an input sample; and a hand-shape encoding unit, for encoding the sign language signal according to the obtained feature vector to produce a character string, with the following coding rules:
(4.1) each finger joint is divided into three states (extended, half bent, and fully bent), and the bending state of the knuckles is encoded in binary;
(4.2) the spatial position of the palm is encoded in binary;
an output matching unit, for querying the sign language dictionary database with the value of the character string; the query result yields the text information.
The module for converting text information into sign language information comprises: an avatar demonstration-space definition unit, for setting the avatar's demonstration space according to the avatar's skeletal parameters and defining it with the space-division rules; a positioning determination unit, for recording the matrix of the moving palm at the center of each region of the demonstration space, together with the relevant bone positions; an input matching unit, for retrieving the input information as a keyword in the sign language dictionary database, and extracting the code of the sign language action if relevant information is retrieved; a key-frame positioning unit, for setting the avatar's bones at the located positions of each region involved in the sign's meaning according to the code of the sign language action, obtaining the avatar's action key frames; and a virtual demonstration unit, for automatically generating interpolated frames from the key frames, obtaining a sign language animation demonstrated by the avatar and displaying it on the screen terminal.
Referring to Fig. 2 and Fig. 3, the motion information of a gesture is encoded as follows. Experimental results on sign language vocabulary, taking full account of the real-time requirement of sign language communication, show that the spatial-coding recognition method given here is stable and rapid. The method is illustrated on Chinese Sign Language vocabulary.
When the user wears the data glove, bending thresholds are defined according to the size of the user's hand. For example, the raw data range of the data glove is 0-4095; for the index finger, a value below 1862 indicates that the finger is extended, a value between 1862 and 2268 indicates that the finger is half bent, and a value between 2268 and 4095 indicates that the finger is fully bent. Because the flexibility of each finger differs, the thresholds of each finger also differ.
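The thresholding above amounts to a simple quantizer. A minimal sketch, using the index-finger thresholds cited in the text; per-finger thresholds would be calibrated differently, and the function name is assumed.

```python
# Map a raw 0-4095 glove reading to a two-bit finger-state code using
# the index-finger thresholds from the text (1862 and 2268). The default
# values and the function name are illustrative.

def quantize_bend(raw, half_threshold=1862, full_threshold=2268):
    """Quantize one raw glove sensor reading into "00"/"01"/"10"."""
    if not 0 <= raw <= 4095:
        raise ValueError("raw reading out of range 0-4095")
    if raw < half_threshold:
        return "00"  # extended
    if raw < full_threshold:
        return "01"  # half bent
    return "10"      # fully bent
```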
The position tracker is fixed on the user's wrist, with the origin of absolute coordinates at the receiver. The angle information obtained by the position tracker determines the palm direction. With the user's mouth and shoulders as separators in the vertical direction, space is divided into 3 horizontal bands; with the left ear, right ear, left shoulder, and right shoulder as separators in the horizontal direction, space is divided into 6 longitudinal bands.
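Classifying a palm coordinate into one of the nine coded regions can then be sketched as two band tests. The region codes are the patent's; the calibrated landmark parameters and the function name are assumptions.

```python
# Sketch: classify the palm into one of the nine coded regions from
# calibrated landmark values (mouth and shoulder heights; ear/shoulder
# lateral positions). The 4-bit codes are from the patent; everything
# else is illustrative.

def classify_region(x, y, mouth_y, shoulder_y, left_x, right_x):
    """Return the 4-bit palm-position code for palm coordinates (x, y).

    left_x/right_x stand for the ear x-coordinates above the shoulders
    and the shoulder x-coordinates below them.
    """
    if y > mouth_y:                 # above the mouth
        row = ("1001", "1000", "1010")
    elif y > shoulder_y:            # between mouth and shoulders
        row = ("0001", "0000", "0010")
    else:                           # below the shoulders
        row = ("0101", "0100", "0110")
    if x < left_x:
        return row[0]               # left band
    if x <= right_x:
        return row[1]               # middle band
    return row[2]                   # right band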
The coding method of the present invention is: according to the hand-shape and gesture information obtained at the terminal, the code value of each hand shape and gesture is set in turn, yielding a bit stream; the code table is stored as a time-ordered dynamic linked-list sequence. According to how the hand shape changes over time, gestures are divided into static gestures, static compound gestures, and dynamic gestures.
(1) Static gestures: the corresponding hand shapes and coding rules are shown in Table 1, which gives the codes of the Chinese manual alphabet:
Letter | Palm direction | Hand-shape code |
---|---|---|
A | 0100 | 0010101010 |
B | 1001 | 1000000000 |
C | 0101 | 0101010101 |
D | 1001 | 1010101010 |
E | 1000 | 1010000000 |
F | 1000 | 1000000101 |
G | 1000 | 1000101010 |
H | 1001 | 1000001010 |
I | 1001 | 1000101010 |
J | 0101 | 1001101010 |
K | 0101 | 0000001010 |
L | 0101 | 0000101010 |
M | 1001 | 1001010110 |
N | 1001 | 1001011010 |
O | 0101 | 1001010101 |
P | 0101 | 1010000000 |
Q | 1001 | 0001011010 |
R | 1000 | 0000101010 |
S | 1001 | 0010101010 |
T | 1001 | 1000101000 |
U | 1001 | 0000000000 |
V | 1001 | 0100001010 |
W | 1001 | 0100000010 |
X | 1001 | 0101001010 |
Y | 1001 | 0010101000 |
Z | 1000 | 0100101000 |
ZH | 1000 | 0100001000 |
CH | 0011 | 0001010101 |
SH | 1001 | 0001011010 |
NG | 1000 | 1010101000 |
Table 1
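Recognition of a static fingerspelled letter is an exact match against Table 1, keyed on the palm-direction and hand-shape codes. A minimal sketch reproducing only the first few rows of the table; names are illustrative.

```python
# Exact-match lookup against a few rows of Table 1, keyed on
# (palm-direction code, hand-shape code). Only rows A-D are reproduced
# here for illustration.

ALPHABET = {
    ("0100", "0010101010"): "A",
    ("1001", "1000000000"): "B",
    ("0101", "0101010101"): "C",
    ("1001", "1010101010"): "D",
}

def recognize_letter(direction_code, hand_code):
    """Return the matched letter, or "unknown" if no entry matches."""
    return ALPHABET.get((direction_code, hand_code), "unknown")
```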
(2) Static compound gestures: for the "hello" sign shown in Fig. 6, the hand shape at each point of the time series is as shown in Table 2:
Time series | Palm direction | Palm position | Hand-shape code |
---|---|---|---|
1 | 0011 | 0100 | 1000101010 |
2 | 0011 | 0100 | 1001101010 |
3 | 0011 | 0100 | 0010101010 |
Table 2
(3) Dynamic gestures: for the "Nice to see you" sign shown in Fig. 7, the hand shape at each point of the time series is as shown in Table 3; the left-hand shape is obtained analogously.
Time series | Palm direction | Palm position | Hand-shape code |
---|---|---|---|
1 | 1000 | 0100 | 0000000000 |
2 | 1000 | 0100 | 1000101010 |
3 | 0010 | 0100 | 1000101010 |
4 | 1000 | 0100 | 0000000000 |
5 | 0011 | 1000 | 1000001010 |
6 | 0101 | 0001 | 1000101010 |
7 | 0101 | 0100 | 1000101010 |
Table 3
The above coded information is matched against the sign language dictionary database. If every code at every point of the time series matches, the corresponding text is output; if any code of the time series differs, "unknown sign language information" is output, and the user may sign again to confirm the sign, or enter the sign into the sign language dictionary database as a new entry.
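The all-or-nothing time-series match described above can be sketched with the "hello" codes from Table 2. The dictionary contents follow the table; the data layout (one triple of direction, position, and hand-shape code per time step) and the names are assumptions.

```python
# A sign is a sequence of (direction, position, hand-shape) code
# triples; a word is output only when every triple of every time step
# matches a dictionary entry exactly.

DICTIONARY = {
    # "hello" per Table 2: three time steps, palm down ("0011"),
    # below-shoulder middle region ("0100").
    (("0011", "0100", "1000101010"),
     ("0011", "0100", "1001101010"),
     ("0011", "0100", "0010101010")): "hello",
}

def match_sign(sequence):
    """Return the matched word, or the patent's fallback message."""
    return DICTIONARY.get(tuple(sequence), "unknown sign language information")

seq = [("0011", "0100", "1000101010"),
       ("0011", "0100", "1001101010"),
       ("0011", "0100", "0010101010")]
```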
When a user who does not know sign language sees the text message and responds with text, the avatar's demonstration space is set as in Fig. 3. The avatar's bones are controlled so that the palm passes through the center of each region of the demonstration space, and the bone-joint matrices and the associated transformation information are recorded. The input text is used as a keyword to retrieve from the sign language data dictionary; if relevant information is retrieved, the code of the sign language action is extracted, and if not, the user is prompted to re-enter the text message. The animation time and the key time points are set according to the time series of the sign language action. At each key time point, the palm-position code is extracted to set the avatar's bones at the located position of the corresponding region of Fig. 3, the palm-direction code is extracted to set the direction of the avatar's hand bones, and the hand-shape code is extracted to set the action of the avatar's finger bones, thereby obtaining the key frame of each time step. Interpolated frames are generated from the key frames, yielding the avatar's sign language key-frame animation, which is displayed on the screen terminal.
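Generating in-between frames from key frames can be sketched as parameter interpolation. This is a deliberately simplified illustration: a real avatar system would interpolate bone rotations (for example with quaternion slerp), while plain linear blending of scalar joint values is shown here only to make the key-frame-to-interpolated-frame step concrete. All names are assumptions.

```python
# Minimal sketch: linearly interpolate scalar joint parameters between
# two key frames, yielding the requested number of in-between frames.

def interpolate_frames(key_a, key_b, steps):
    """Yield `steps` in-between frames strictly between key_a and key_b."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # interpolation fraction in (0, 1)
        frames.append([a + t * (b - a) for a, b in zip(key_a, key_b)])
    return frames

# Three in-betweens between two toy two-joint key frames.
tween = interpolate_frames([0.0, 0.0], [1.0, 2.0], 3)
```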
Claims (3)
1. A real-time sign language communication system based on spatial coding, comprising a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hands are placed, and an intelligent recognition device for performing sign language recognition from the data glove and the position tracker, characterized in that: the intelligent recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database that establishes corresponding sequences of text information, sign language codes, and action animations, the module for converting sign language motion information into text information comprising:
a signal data collection unit, for obtaining the data frames of each time period according to the baud rates of the data glove and the position tracker, yielding a series of vector data as input;
a data preprocessing unit, for defining the bend-value ranges according to the glove wearer's signing habits, and for dividing the spatial regions according to the wearer's personal features using the position tracker, locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder, and the right shoulder;
a sign language feature extraction unit, for extracting gesture information from the data-glove input and extracting the direction and position of the hand from the position-tracker input, forming the feature vector of an input sample;
a hand-shape encoding unit, for encoding the sign language signal according to the obtained feature vector to produce a character string, with the following coding rules:
(4.1) each finger joint is divided into three states (extended, half bent, and fully bent), and the bending state of the knuckles is encoded in binary;
(4.2) the spatial position of the palm is encoded in binary: in the vertical direction, with the mouth and the shoulders as separators, space is divided into 3 horizontal bands; in the horizontal direction, with the left ear, right ear, left shoulder, and right shoulder as separators, space is divided into 6 longitudinal bands;
an output matching unit, for querying the sign language dictionary database with the value of the character string; the query result yields the text information.
The module for converting text information into sign language information comprises:
an avatar demonstration-space definition unit, for setting the avatar's demonstration space according to the avatar's skeletal parameters and defining it with the space-division rules, dividing the spatial regions by the positions of the left ear, right ear, left shoulder, and right shoulder;
a positioning determination unit, for recording the matrix of the moving palm at the center of each region of the demonstration space, together with the relevant bone positions;
an input matching unit, for retrieving the input information as a keyword in the sign language dictionary database, and extracting the code of the sign language action if relevant information is retrieved;
a key-frame positioning unit, for setting the avatar's bones at the located positions of each region involved in the sign's meaning according to the code of the sign language action, obtaining the avatar's action key frames, the detailed process being as follows: the avatar's bones are controlled so that the palm passes through the center of each region of the demonstration space, and the bone-joint matrices and the associated transformation information are recorded; the input text information is used as a keyword to retrieve from the sign language data dictionary, the code of the sign language action is extracted if relevant information is retrieved, and otherwise the user is prompted to re-enter the text information; the animation time and the key time points are set according to the time series of the sign language action; at each key time point, the palm-position code is extracted to set the avatar's bones at the located position of each region, the palm-direction code is extracted to set the direction of the avatar's hand bones, and the hand-shape code is extracted to set the action of the avatar's finger bones, thereby obtaining the key frame of each time step; and
a virtual demonstration unit, for automatically generating interpolated frames from the key frames, obtaining a sign language animation demonstrated by the avatar and displaying it on the screen terminal.
2. The real-time sign language communication system based on spatial coding as claimed in claim 1, characterized in that: in the hand-shape encoding unit, the three finger states are encoded as extended 00, half bent 01, and fully bent 10; each finger is encoded according to this bending-state scheme, so a single ten-character string represents the state of all five fingers.
3. The real-time sign language communication system based on space encoding according to claim 1 or claim 2, characterized in that: in the hand shape information coding unit, the binary coding of the palm's spatial position proceeds as follows: the space above the mouth is divided into three regions, where the region to the left of the left ear is "1001", the region between the left and right ears is "1000", and the region to the right of the right ear is "1010"; the space below the mouth and above the shoulders is divided into three regions, where the region to the left of the left ear is "0001", the region between the left and right ears is "0000", and the region to the right of the right ear is "0010"; the space below the shoulders is likewise divided into three regions, where the region to the left of the left shoulder is "0101", the region between the left and right shoulders is "0100", and the region to the right of the right shoulder is "0110". The palm direction is determined from the XYZ coordinates of the position tracker corresponding to the palm vector: palm up is "0010", palm down is "0011", palm facing left is "0101", palm facing right is "0100", palm facing forward is "1001", and palm facing the body is "1000".
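The nine spatial regions and six palm directions of claim 3 amount to two fixed lookup tables. A sketch with the codes taken from the claim; the region and direction labels are illustrative names, not terms from the patent:

```python
# Four-bit codes for the nine palm-position regions defined in claim 3:
# three vertical bands (above mouth / mouth-to-shoulder / below shoulder),
# each split into left, middle, and right.
REGION_CODES = {
    ("above_mouth", "left_of_left_ear"): "1001",
    ("above_mouth", "between_ears"): "1000",
    ("above_mouth", "right_of_right_ear"): "1010",
    ("mouth_to_shoulder", "left_of_left_ear"): "0001",
    ("mouth_to_shoulder", "between_ears"): "0000",
    ("mouth_to_shoulder", "right_of_right_ear"): "0010",
    ("below_shoulder", "left_of_left_shoulder"): "0101",
    ("below_shoulder", "between_shoulders"): "0100",
    ("below_shoulder", "right_of_right_shoulder"): "0110",
}

# Four-bit codes for the six palm directions defined in claim 3.
DIRECTION_CODES = {
    "up": "0010", "down": "0011",
    "left": "0101", "right": "0100",
    "forward": "1001", "toward_body": "1000",
}

# Palm held between the ears, above the mouth, facing forward:
print(REGION_CODES[("above_mouth", "between_ears")])  # "1000"
print(DIRECTION_CODES["forward"])                     # "1001"
```

Combined with the ten-character hand-shape string of claim 2, a single hand posture is thus described by a short fixed-width binary code, which is what makes dictionary lookup and key-frame generation straightforward.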
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2008101635548A CN101609618B (en) | 2008-12-23 | 2008-12-23 | Real-time hand language communication system based on special codes |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2008101635548A CN101609618B (en) | 2008-12-23 | 2008-12-23 | Real-time hand language communication system based on special codes |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101609618A CN101609618A (en) | 2009-12-23 |
CN101609618B true CN101609618B (en) | 2012-05-30 |
Family
ID=41483357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2008101635548A Expired - Fee Related CN101609618B (en) | 2008-12-23 | 2008-12-23 | Real-time hand language communication system based on special codes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101609618B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102193633B (en) * | 2011-05-25 | 2012-12-12 | 广州畅途软件有限公司 | Dynamic sign language recognition method for data gloves |
CN103309434B (en) * | 2012-03-12 | 2016-03-30 | 联想(北京)有限公司 | Instruction recognition method and electronic device |
CN102723019A (en) * | 2012-05-23 | 2012-10-10 | 苏州奇可思信息科技有限公司 | Sign language teaching system |
CN103337079A (en) * | 2013-07-09 | 2013-10-02 | 广州新节奏智能科技有限公司 | Virtual augmented reality teaching method and device |
CN104462162A (en) * | 2013-11-25 | 2015-03-25 | 安徽寰智信息科技股份有限公司 | Novel sign language recognition and collection method and device |
CN104134060B (en) * | 2014-08-03 | 2018-01-05 | 上海威璞电子科技有限公司 | Sign language interpreter and display sonification system based on electromyographic signal and motion sensor |
CN104599553B (en) * | 2014-12-29 | 2017-01-25 | 闽南师范大学 | Barcode recognition-based sign language teaching system and method |
CN104599554B (en) * | 2014-12-29 | 2017-01-25 | 闽南师范大学 | Two-dimensional code recognition-based sign language teaching system and method |
CN104765455A (en) * | 2015-04-07 | 2015-07-08 | 中国海洋大学 | Man-machine interactive system based on striking vibration |
CN106056994A (en) * | 2016-08-16 | 2016-10-26 | 安徽渔之蓝教育软件技术有限公司 | Assisted learning system for gesture language vocational education |
CN108427910B (en) * | 2018-01-30 | 2021-09-21 | 浙江凡聚科技有限公司 | Deep neural network AR sign language translation learning method, client and server |
CN110491250A (en) * | 2019-08-02 | 2019-11-22 | 安徽易百互联科技有限公司 | Tutoring system for deaf-mute people |
CN113657101A (en) * | 2021-07-20 | 2021-11-16 | 北京搜狗科技发展有限公司 | Data processing method and device and data processing device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1506871A (en) * | 2002-12-06 | 2004-06-23 | 徐晓毅 | Sign language translating system |
CN1664807A (en) * | 2005-03-21 | 2005-09-07 | 山东省气象局 | Application of sign language weather forecast on networks |
CN101005574A (en) * | 2006-01-17 | 2007-07-25 | 上海中科计算技术研究所 | Video virtual human sign language compiling system |
- 2008-12-23 CN CN2008101635548A patent/CN101609618B/en not_active Expired - Fee Related
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1506871A (en) * | 2002-12-06 | 2004-06-23 | 徐晓毅 | Sign language translating system |
CN1664807A (en) * | 2005-03-21 | 2005-09-07 | 山东省气象局 | Application of sign language weather forecast on networks |
CN101005574A (en) * | 2006-01-17 | 2007-07-25 | 上海中科计算技术研究所 | Video virtual human sign language compiling system |
Also Published As
Publication number | Publication date |
---|---|
CN101609618A (en) | 2009-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101609618B (en) | Real-time hand language communication system based on special codes | |
CN101577062B (en) | Space encoding-based method for realizing interconversion between sign language motion information and text message | |
CN105868715B (en) | Gesture recognition method and device and gesture learning system | |
CN109190578B (en) | Sign language video interpretation method based on fusion of convolutional and recurrent neural networks | |
CN104199834B (en) | Method and system for interactively obtaining and outputting remote resources from an information carrier surface | |
CN108776773B (en) | Three-dimensional gesture recognition method and interaction system based on depth image | |
CN106033435B (en) | Item identification method and device, indoor map generation method and device | |
CN105426850A (en) | Human face identification based related information pushing device and method | |
CN102831380A (en) | Body action identification method and system based on depth image induction | |
CN107301820A (en) | Intelligent advertising player that recognizes audience type, and control method thereof | |
CN104134060A (en) | Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors | |
CN107678550A (en) | Sign language gesture recognition system based on a data glove | |
CN103745423B (en) | Mouth-shape teaching system and teaching method | |
CN112508750A (en) | Artificial intelligence teaching device, method, equipment and storage medium | |
CN102567716A (en) | Face synthesis system and implementation method | |
CN110275987A (en) | Intelligent tutoring consultant generation method, system, equipment and storage medium | |
CN108960171B (en) | Method for converting gesture recognition into identity recognition based on feature transfer learning | |
CN114998983A (en) | Limb rehabilitation method based on augmented reality technology and posture recognition technology | |
CN108510988A (en) | Speech recognition system and method for deaf-mute people | |
CN111723779A (en) | Chinese sign language recognition system based on deep learning | |
CN114842547A (en) | Sign language teaching method, device and system based on gesture action generation and recognition | |
CN115359394A (en) | Identification method based on multi-mode fusion and application thereof | |
CN117055724A (en) | Generating type teaching resource system in virtual teaching scene and working method thereof | |
CN115188074A (en) | Interactive physical training evaluation method, device and system and computer equipment | |
Loeding et al. | Progress in automated computer recognition of sign language |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20120530 |