CN101609618A - Real-time sign language communication system based on spatial encoding - Google Patents

Real-time sign language communication system based on spatial encoding

Info

Publication number
CN101609618A
Authority
CN
China
Prior art keywords
sign language
space
information
palm
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008101635548A
Other languages
Chinese (zh)
Other versions
CN101609618B (en)
Inventor
张宁宁
顾容
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN2008101635548A priority Critical patent/CN101609618B/en
Publication of CN101609618A publication Critical patent/CN101609618A/en
Application granted granted Critical
Publication of CN101609618B publication Critical patent/CN101609618B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A real-time sign language communication system based on spatial encoding comprises a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hand is located, and an intelligent recognition device for performing sign language recognition according to the data glove and the position tracker. The intelligent recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database for establishing correspondences among text information, sign language codes and action animations. The invention provides a real-time sign language communication system based on spatial encoding that is fast, works in real time and is low in cost.

Description

Real-time sign language communication system based on spatial encoding
Technical field
The present invention relates to a system for real-time communication between deaf-mute and hearing people, and in particular to a sign language communication system.
Background technology
Sign language is the language used by deaf-mute people. It is a relatively stable expressive system composed of hand shapes and movements supplemented by facial expressions and postures, and it is a special language that communicates through action and vision. China has more than 20 million people with hearing impairments, and they communicate mainly in sign language. Because sign language is not a language commonly used by most members of society, this greatly restricts their exchanges with society. The development of a sign language communication system can alleviate this problem to a certain extent, plays a significant role in creating a barrier-free environment for deaf people, and is of great value for promoting standard Chinese Sign Language.
As society pays more attention to the deaf-mute community, more and more scholars and experts have begun to study sign language recognition in order to better realize communication between hearing people and deaf-mute people. Current sign language recognition is mainly divided into recognition based on data gloves and recognition based on vision (images). For sign language recognition, a data glove should be used as the hand-shape input device and a position tracker should be used to capture the motion of the palm, so that spatial and temporal information is acquired concurrently. Compared with a camera, the data collected by a data glove and a position tracker are concise and accurate, and these two acquisition devices readily provide features that reflect the spatio-temporal characteristics of sign language, such as finger-joint motion information and palm motion information; moreover, the data collected by the data glove are not affected by environmental changes such as illumination. Many experts around the world are devoted to research on sign language recognition methods and have realized the conversion of sign language signals into text and acoustic information; some experts have also used electronic devices to convert text information into sign language animation, thereby realizing one-way sign language communication between a person and a machine terminal.
However, the vast majority of communication barriers for deaf-mute people arise in the process of communicating with hearing people. To achieve real-time communication, the input and output devices must respond quickly, and efficient recognition and conversion methods are also required, so that sign language and text information can be converted into each other almost instantly. To facilitate wider adoption, the cost of the equipment must not be too high.
Summary of the invention
To overcome the poor speed, poor real-time performance and high cost of existing sign language recognition systems, the present invention provides a real-time sign language communication system based on spatial encoding that is fast, works in real time and is low in cost.
The technical solution adopted by the present invention to solve this technical problem is:
A real-time sign language communication system based on spatial encoding comprises a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hand is located, and an intelligent recognition device for performing sign language recognition according to the data glove and the position tracker. The intelligent recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database for establishing correspondences among text information, sign language codes and action animations. The module for converting sign language motion information into text information comprises:
a signal data acquisition unit, which obtains data frames for each time period according to the baud rate of the data glove and the position tracker, yielding a series of vector data as input;
a data preprocessing unit, which defines the value range of finger flexion according to the signing habits of the data glove wearer, and uses the position tracker to divide the spatial regions according to the wearer's individual build, locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder and the right shoulder;
a sign language feature extraction unit, which extracts gesture information from the data input by the data glove and extracts the direction and position of the hand from the data input by the position tracker, forming the feature vector of an input sample;
a hand-shape information encoding unit, which encodes the sign language signal according to the obtained feature vector and produces a character string, with the following coding rules:
(4.1) each finger joint is classified into one of three states, namely extended, half bent and fully bent, and the bending state of the finger joints is encoded in binary;
(4.2) the spatial region in which the palm is located is encoded in binary;
an output information matching unit, which queries the sign language dictionary database with the value of the character string and obtains text information from the query result.
The module for converting text information into sign language information comprises:
a virtual-human presentation space definition unit, which sets the presentation space of the virtual human according to the virtual human's skeleton parameters and defines the presentation space with the spatial division rules;
a positioning determination unit, which records the transformation matrices of the moving palm at the center of each region of the presentation space together with the related bone positions;
an input information matching unit, which searches the sign language dictionary database using the input information as a keyword and, if related information is found, extracts the code of the sign language action;
a key frame positioning unit, which, according to the code of the sign language action, places the virtual human's bones at the positioning points of the regions involved in the meaning of the sign, thereby obtaining the key frames of the virtual human's motion;
a virtual demonstration unit, which automatically generates interpolated frames from the key frames, obtains the sign language animation demonstrated by the virtual human, and displays it on a screen terminal.
In a preferred scheme, in the hand-shape information encoding unit the three finger states are encoded as 00 for extended, 01 for half bent and 10 for fully bent; each finger is encoded in this way, so the state of the five fingers of one hand is represented by a single ten-character string.
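For illustration only (this sketch is not part of the claimed system), the following Python fragment shows how a ten-character hand-shape string could be assembled from five per-finger flexion states using the 00/01/10 codes above; the function and state names are assumptions.

```python
# Hypothetical sketch: encode five finger flexion states into the
# ten-character hand-shape string described in the preferred scheme.
FINGER_STATE_CODES = {
    "extended": "00",    # finger stretched straight
    "half_bent": "01",   # finger half bent
    "fully_bent": "10",  # finger fully bent
}

def encode_hand_shape(finger_states):
    """finger_states: five state names, thumb to little finger."""
    if len(finger_states) != 5:
        raise ValueError("expected five finger states")
    return "".join(FINGER_STATE_CODES[s] for s in finger_states)

# Example: all five fingers fully bent -> "1010101010"
print(encode_hand_shape(["fully_bent"] * 5))
```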
In another preferred scheme, in the hand-shape information encoding unit the spatial region of the palm is binary-coded as follows: the space above the mouth is divided into three regions, where the region to the left of the left ear is "1001", the region between the left ear and the right ear is "1000", and the region to the right of the right ear is "1010"; the space below the mouth and above the shoulders is divided into three regions, where the region to the left of the left ear is "0001", the region between the left ear and the right ear is "0000", and the region to the right of the right ear is "0010"; the space below the shoulders is likewise divided into three regions, where the region to the left of the left shoulder is "0101", the region between the left shoulder and the right shoulder is "0100", and the region to the right of the right shoulder is "0110". The palm direction is determined from the XYZ coordinates reported by the position tracker for the palm vector: palm up is 0010, palm down is 0011, palm to the left is 0101, palm to the right is 0100, palm forward is 1001 and palm toward the body is 1000.
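Purely as an illustrative sketch, the two four-bit codes can be obtained by table lookup; the code tables below are taken from the preferred scheme above, while the function names and band labels are assumptions.

```python
# Hypothetical sketch: four-bit codes for palm position and palm direction,
# using the code tables given in the preferred scheme above.
PALM_REGION_CODES = {
    # (vertical band, horizontal band) -> 4-bit code
    ("above_mouth", "left_of_left_ear"): "1001",
    ("above_mouth", "between_ears"): "1000",
    ("above_mouth", "right_of_right_ear"): "1010",
    ("mouth_to_shoulder", "left_of_left_ear"): "0001",
    ("mouth_to_shoulder", "between_ears"): "0000",
    ("mouth_to_shoulder", "right_of_right_ear"): "0010",
    ("below_shoulder", "left_of_left_shoulder"): "0101",
    ("below_shoulder", "between_shoulders"): "0100",
    ("below_shoulder", "right_of_right_shoulder"): "0110",
}

PALM_DIRECTION_CODES = {
    "up": "0010", "down": "0011", "left": "0101",
    "right": "0100", "forward": "1001", "toward_body": "1000",
}

def encode_palm(vertical_band, horizontal_band, direction):
    """Return (position_code, direction_code) for one palm sample."""
    return (PALM_REGION_CODES[(vertical_band, horizontal_band)],
            PALM_DIRECTION_CODES[direction])

# Example: palm between the ears above the mouth, facing forward.
print(encode_palm("above_mouth", "between_ears", "forward"))  # ('1000', '1001')
```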
The technical concept of the present invention is as follows: based on statistical regularities of commonly used sign language actions, a coded hand-shape scheme and a spatial division of gestures are constructed, and a method dedicated to real-time conversion between sign language and text is proposed. By combining the hand-shape and gesture codes with a sign language dictionary database, a fast encoding method is provided for sign language recognition and a fast decoding method is provided for sign language synthesis. The method is characterized by high coding efficiency and fast decoding.
The present invention also proposes an effective way of using a sign language dictionary database, in order to store sign language codes and support sign language synthesis. The conventional approach of importing the raw animation data of every action into a sign language motion database one by one requires a large amount of storage, runs inefficiently, and produces sign language synthesis files that are too large for real-time sign language translation. The database usage method proposed here is considerably more efficient.
The characteristics of the spatial encoding of the present invention are: (1) In commonly used sign language, the spatial positions of the hands are concentrated mainly around the head and only to a small extent around the upper body, so the space around the head is where sign language motion occurs most frequently. In the vertical direction the space is divided into three horizontal bands separated by the mouth and the shoulders; in the horizontal direction it is divided into six vertical bands separated by the left ear, the right ear, the left shoulder and the right shoulder. This division partitions the space around the head finely and the space around the body coarsely, thereby improving the efficiency of spatial recognition. (2) The sign language space is predefined according to the individual build of the person wearing the sign language equipment, so the spatial division is adaptive. (3) The coding efficiency is high. By exploiting the spatial division, both the hand-shape coding and the gesture coding use pure binary codes, without long code words or special code words, so the sign language dictionary database needs less storage and can be read and written more efficiently. Using this method to encode sign language motion information, the amount of code for a sentence can be kept within 0.75 KB, depending on its length. (4) Decoding is fast. Because the code sequence maps directly onto the virtual human's bone matrices, editing of pictures or animations is avoided during decoding, so sign language synthesis is easy to realize. (5) The method is simple and convenient to implement. The whole algorithm uses only binary matching operations and avoids complex computation; it is a simple and efficient coding method that can easily be ported to development platforms of many different versions.
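As a rough illustration of how compact one coded time step is, each sample combines a 4-bit direction code, a 4-bit position code and a 10-bit hand-shape code, i.e. 18 bits; packing them into an integer, as below, is an assumption made only for illustration (the patent describes the codes as character strings).

```python
# Hypothetical sketch: pack one time step's codes (4-bit direction,
# 4-bit position, 10-bit hand shape) into a single 18-bit integer.
def pack_frame(direction_code, position_code, hand_shape_code):
    bits = direction_code + position_code + hand_shape_code  # 18 binary characters
    assert len(bits) == 18 and set(bits) <= {"0", "1"}
    return int(bits, 2)

# "Hello" (see Table 2 below) uses three time steps, i.e. roughly 3 * 18 bits.
frame = pack_frame("0011", "0100", "1000101010")
print(frame, format(frame, "018b"))
```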
The beneficial effects of the present invention are mainly: (1) by means of the spatial division and coding of sign language motion, real-time sign language communication is achieved and meaningless waiting and response time are reduced; (2) a data glove with only finger-bend sensors is sufficient to extract hand-shape features, which reduces the cost of the supporting equipment; (3) the coding is efficient, and the wearer can extend the sign language dictionary database by signing new entries, which ensures the completeness of the sign language vocabulary; (4) the fast sign language synthesis method guarantees a high synthesis speed; (5) the real-time sign language communication system can also be used for training and can provide sign language teaching for users who do not know sign language.
Description of drawings
Fig. 1 is a schematic diagram of the framework of the real-time sign language communication system.
Fig. 2 is a schematic diagram of the hand-shape coding key.
Fig. 3 is a schematic diagram of the sign language space division.
Fig. 4 is a schematic diagram of the reference coordinates for measuring the palm direction.
Fig. 5 is a chart of the Chinese manual alphabet.
Fig. 6 is a schematic diagram of the sign for "hello".
Fig. 7 is a schematic diagram of the sign for "Nice to see you".
Embodiment
The present invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1 to 7, a real-time sign language communication system based on spatial encoding comprises a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hand is located, and an intelligent recognition device for performing sign language recognition according to the data glove and the position tracker. The intelligent recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database for establishing correspondences among text information, sign language codes and action animations. The module for converting sign language motion information into text information comprises: a signal data acquisition unit, which obtains data frames for each time period according to the baud rate of the data glove and the position tracker, yielding a series of vector data as input; a data preprocessing unit, which defines the value range of finger flexion according to the signing habits of the data glove wearer, and uses the position tracker to divide the spatial regions according to the wearer's individual build, locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder and the right shoulder; a sign language feature extraction unit, which extracts gesture information from the data input by the data glove and extracts the direction and position of the hand from the data input by the position tracker, forming the feature vector of an input sample; and a hand-shape information encoding unit, which encodes the sign language signal according to the obtained feature vector and produces a character string, with the following coding rules:
(4.1) each finger joint is classified into one of three states, namely extended, half bent and fully bent, and the bending state of the finger joints is encoded in binary;
(4.2) the spatial region in which the palm is located is encoded in binary.
The output information matching unit queries the sign language dictionary database with the value of the character string and obtains text information from the query result.
The module for converting text information into sign language information comprises: a virtual-human presentation space definition unit, which sets the presentation space of the virtual human according to the virtual human's skeleton parameters and defines the presentation space with the spatial division rules; a positioning determination unit, which records the transformation matrices of the moving palm at the center of each region of the presentation space together with the related bone positions; an input information matching unit, which searches the sign language dictionary database using the input information as a keyword and, if related information is found, extracts the code of the sign language action; a key frame positioning unit, which, according to the code of the sign language action, places the virtual human's bones at the positioning points of the regions involved in the meaning of the sign, thereby obtaining the key frames of the virtual human's motion; and a virtual demonstration unit, which automatically generates interpolated frames from the key frames, obtains the sign language animation demonstrated by the virtual human, and displays it on a screen terminal.
Referring to Figs. 2 and 3, the motion information of a gesture is encoded as follows. Based on experiments on sign language vocabulary and with full consideration of the real-time requirements of sign language communication, the spatial-encoding-based sign language recognition method provided here is stable and fast. Its application is illustrated on Chinese Sign Language vocabulary.
When the user puts on the data glove, the flexion thresholds are defined according to the size of the user's hand and the individual user. For example, if the raw data range of the data glove is 0 to 4095, then for the index finger a reading below 1862 indicates that the finger is extended, a reading between 1862 and 2268 indicates that the finger is half bent, and a reading between 2268 and 4095 indicates that the finger is fully bent. Because each finger bends to a different degree, the thresholds differ from finger to finger.
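A minimal sketch of this thresholding, using the index-finger values quoted above; the per-finger threshold table and the function name are illustrative assumptions, and in practice the thresholds are calibrated per wearer and per finger.

```python
# Hypothetical sketch: map a raw glove reading (0-4095) to a flexion state
# using per-finger thresholds; the index-finger values are from the text,
# the other fingers' thresholds are placeholders to be calibrated.
FLEX_THRESHOLDS = {
    "index": (1862, 2268),
    # "thumb": (...), "middle": (...), "ring": (...), "little": (...)
}

def flexion_state(finger, raw_value):
    lo, hi = FLEX_THRESHOLDS[finger]
    if raw_value < lo:
        return "extended"     # code 00
    elif raw_value < hi:
        return "half_bent"    # code 01
    else:
        return "fully_bent"   # code 10

print(flexion_state("index", 1500))  # extended
print(flexion_state("index", 2000))  # half_bent
print(flexion_state("index", 3000))  # fully_bent
```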
The position tracker is fixed on the user's wrist, and the origin of the absolute coordinate system is at the receiver. The angle information obtained from the position tracker determines the palm direction. For the user, the space is divided in the vertical direction into three horizontal bands separated by the mouth and the shoulders, and in the horizontal direction into six vertical bands separated by the left ear, the right ear, the left shoulder and the right shoulder.
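The following sketch classifies a palm position into one of the nine coded regions, reusing the code table from the preferred scheme above; the landmark calibration, the coordinate convention (y upward, x toward the signer's right) and the function names are assumptions.

```python
# Hypothetical sketch: classify a palm position into a 4-bit region code
# from calibrated landmark coordinates (y up, x to the signer's right).
def region_code(x, y, landmarks):
    """landmarks: dict with mouth_y, shoulder_y, left_ear_x, right_ear_x,
    left_shoulder_x, right_shoulder_x in tracker coordinates."""
    if y > landmarks["mouth_y"]:                       # above the mouth
        if x < landmarks["left_ear_x"]:   return "1001"
        if x > landmarks["right_ear_x"]:  return "1010"
        return "1000"
    if y > landmarks["shoulder_y"]:                    # mouth to shoulders
        if x < landmarks["left_ear_x"]:   return "0001"
        if x > landmarks["right_ear_x"]:  return "0010"
        return "0000"
    # below the shoulders
    if x < landmarks["left_shoulder_x"]:  return "0101"
    if x > landmarks["right_shoulder_x"]: return "0110"
    return "0100"

calib = {"mouth_y": 1.50, "shoulder_y": 1.35,
         "left_ear_x": -0.10, "right_ear_x": 0.10,
         "left_shoulder_x": -0.20, "right_shoulder_x": 0.20}
print(region_code(0.0, 1.40, calib))  # "0000": between the ears, below the mouth
```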
The coding method of the present invention is: according to the hand shape and gesture information obtained at the terminal, the code values of each hand shape and each piece of gesture information are set in turn, yielding a stream of binary data, and the code table is stored in a linked list ordered by time. According to how the hand shape changes over time, gestures can be classified as static gestures, static compound gestures and dynamic gestures.
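The sketch below shows one way of accumulating the per-time-step codes into a time-ordered sequence and crudely distinguishing the three gesture types by how the codes change; the classification rule and the class names are illustrative assumptions, not the patent's definitions, and the example data are the "hello" codes from Table 2 below.

```python
# Hypothetical sketch: accumulate (direction, position, hand_shape) codes per
# time step and roughly classify the gesture by how the codes change.
from collections import namedtuple

Frame = namedtuple("Frame", "direction position hand_shape")

def classify(frames):
    shapes = {f.hand_shape for f in frames}
    positions = {f.position for f in frames}
    if len(shapes) == 1 and len(positions) == 1:
        return "static gesture"
    if len(positions) == 1:
        return "static compound gesture"   # hand shape changes in place
    return "dynamic gesture"               # position (and shape) change over time

hello = [Frame("0011", "0100", "1000101010"),
         Frame("0011", "0100", "1001101010"),
         Frame("0011", "0100", "0010101010")]
print(classify(hello))  # static compound gesture
```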
(1) Static gestures: the corresponding sign language hand shapes and coding rules are shown in Table 1, which lists the codes for the Chinese manual alphabet:
Letter  Palm direction  Hand shape code
A       0100            0010101010
B       1001            1000000000
C       0101            0101010101
D       1001            1010101010
E       1000            1010000000
F       1000            1000000101
G       1000            1000101010
H       1001            1000001010
I       1001            1000101010
J       0101            1001101010
K       0101            0000001010
L       0101            0000101010
M       1001            1001010110
N       1001            1001011010
O       0101            1001010101
P       0101            1010000000
Q       1001            0001011010
R       1000            0000101010
S       1001            0010101010
T       1001            1000101000
U       1001            0000000000
V       1001            0100001010
W       1001            0100000010
X       1001            0101001010
Y       1001            0010101000
Z       1000            0100101000
ZH      1000            0100001000
CH      0011            0001010101
SH      1001            0001011010
NG      1000            1010101000
Table 1
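As an illustration, a fingerspelling lookup over Table 1 can be a direct dictionary query on the pair (palm direction, hand-shape code); the sketch below includes only a few rows of the table, and the function name is an assumption.

```python
# Hypothetical sketch: look up a manual-alphabet letter from its
# (palm direction, hand shape) code pair, using a few rows of Table 1.
ALPHABET_CODES = {
    ("0100", "0010101010"): "A",
    ("1001", "1000000000"): "B",
    ("0101", "0101010101"): "C",
    ("1001", "1010101010"): "D",
}

def lookup_letter(direction_code, hand_shape_code):
    return ALPHABET_CODES.get((direction_code, hand_shape_code), "unknown")

print(lookup_letter("1001", "1000000000"))  # B
```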
(2) Static compound gestures: the sign for "hello" is shown in Fig. 6; the hand shape at each point of the time series is shown in Table 2.
Time step  Palm direction  Palm position  Hand shape code
1          0011            0100           1000101010
2          0011            0100           1001101010
3          0011            0100           0010101010
Table 2
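For illustration, such an entry could be stored in the sign language dictionary database as a time-ordered sequence of (direction, position, hand shape) codes keyed by the word; the structure below is an assumption about one possible representation, not the patent's database schema.

```python
# Hypothetical sketch: one dictionary entry for the static compound
# gesture "hello", as the time-ordered code sequence of Table 2.
SIGN_DICTIONARY = {
    "hello": [
        ("0011", "0100", "1000101010"),  # time step 1
        ("0011", "0100", "1001101010"),  # time step 2
        ("0011", "0100", "0010101010"),  # time step 3
    ],
}

print(len(SIGN_DICTIONARY["hello"]))  # 3 time steps
```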
(3) Dynamic gestures: the sign for "Nice to see you" is shown in Fig. 7; the hand shape at each point of the time series is shown in Table 3 (the left hand is coded in the same way).
Time step  Palm direction  Palm position  Hand shape code
1          1000            0100           0000000000
2          1000            0100           1000101010
3          0010            0100           1000101010
4          1000            0100           0000000000
5          0011            1000           1000001010
6          0101            0001           1000101010
7          0101            0100           1000101010
Table 3
The coded information above is matched against the sign language dictionary database. If the codes at every point of the time series are identical to an entry, the corresponding text is output; if any code differs, "unknown sign language information" is output, and the user can sign again to confirm the sign language information or enter it into the sign language dictionary database as a new sign.
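A minimal sketch of this matching step follows; the dictionary entry is the "hello" sequence from Table 2, while the function names and the way an unmatched sequence is reported are assumptions.

```python
# Hypothetical sketch: exact, per-time-step matching of a recognized code
# sequence against the sign language dictionary database.
SIGN_DICTIONARY = {
    "hello": [("0011", "0100", "1000101010"),
              ("0011", "0100", "1001101010"),
              ("0011", "0100", "0010101010")],
}

def match_sequence(frames):
    for word, entry in SIGN_DICTIONARY.items():
        if len(entry) == len(frames) and all(a == b for a, b in zip(entry, frames)):
            return word
    return "unknown sign language information"

recognized = [("0011", "0100", "1000101010"),
              ("0011", "0100", "1001101010"),
              ("0011", "0100", "0010101010")]
print(match_sequence(recognized))  # hello
```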
When a user who does not know sign language sees the text message and wants to reply with text, the virtual human's presentation space is set up with reference to Fig. 3. The virtual human's bones are controlled so that the palm passes through the center of each region of the presentation space, and the bone-joint matrices and the related transformation information are recorded. The input text is used as a keyword to search the sign language dictionary database; if related information is found, the code of the sign language action is extracted, and if nothing is found the user is prompted to re-enter the text. The animation time and the key time points are set according to the time series of the sign language action. At each key time point, the palm position code is extracted to place the virtual human's bones at the positioning point of the corresponding region shown in Fig. 3, the palm direction code is extracted to set the orientation of the virtual hand bones, and the hand shape code is extracted to set the motion of the virtual finger bones, thereby obtaining the key frame of each time step. Interpolated frames are then generated from the key frames, and the resulting sign language key-frame animation demonstrated by the virtual human is displayed on a screen terminal.
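A compact sketch of this synthesis step, reduced to interpolating the palm position between key frames: the region-center coordinates, the timing and the linear interpolation are illustrative assumptions, and the patent drives a full virtual-human skeleton rather than a single point.

```python
# Hypothetical sketch: turn a retrieved code sequence into key frames at the
# region centers and linearly interpolate palm positions between them.
REGION_CENTERS = {          # assumed (x, y) centers of two coded regions
    "0100": (0.0, 1.2),     # between the shoulders, below the shoulder line
    "1000": (0.0, 1.6),     # between the ears, above the mouth
}

def key_frames(code_sequence, seconds_per_step=0.5):
    return [(i * seconds_per_step, REGION_CENTERS[pos])
            for i, (_direction, pos, _shape) in enumerate(code_sequence)]

def interpolate(frames, fps=25):
    out = []
    for (t0, (x0, y0)), (t1, (x1, y1)) in zip(frames, frames[1:]):
        steps = max(1, int((t1 - t0) * fps))
        for k in range(steps):
            a = k / steps
            out.append((t0 + a * (t1 - t0),
                        (x0 + a * (x1 - x0), y0 + a * (y1 - y0))))
    out.append(frames[-1])
    return out

codes = [("0011", "0100", "1000101010"), ("0011", "1000", "1000001010")]
print(len(interpolate(key_frames(codes))))  # number of generated frames
```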

Claims (3)

1. A real-time sign language communication system based on spatial encoding, comprising a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hand is located, and an intelligent recognition device for performing sign language recognition according to the data glove and the position tracker, characterized in that: the intelligent recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database for establishing correspondences among text information, sign language codes and action animations, and the module for converting sign language motion information into text information comprises:
a signal data acquisition unit, which obtains data frames for each time period according to the baud rate of the data glove and the position tracker, yielding a series of vector data as input;
a data preprocessing unit, which defines the value range of finger flexion according to the signing habits of the data glove wearer, and uses the position tracker to divide the spatial regions according to the wearer's individual build, locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder and the right shoulder;
a sign language feature extraction unit, which extracts gesture information from the data input by the data glove and extracts the direction and position of the hand from the data input by the position tracker, forming the feature vector of an input sample;
a hand-shape information encoding unit, which encodes the sign language signal according to the obtained feature vector and produces a character string, with the following coding rules:
(4.1) each finger joint is classified into one of three states, namely extended, half bent and fully bent, and the bending state of the finger joints is encoded in binary;
(4.2) the spatial region in which the palm is located is encoded in binary;
an output information matching unit, which queries the sign language dictionary database with the value of the character string and obtains text information from the query result;
and the module for converting text information into sign language information comprises:
a virtual-human presentation space definition unit, which sets the presentation space of the virtual human according to the virtual human's skeleton parameters and defines the presentation space with the spatial division rules;
a positioning determination unit, which records the transformation matrices of the moving palm at the center of each region of the presentation space together with the related bone positions;
an input information matching unit, which searches the sign language dictionary database using the input information as a keyword and, if related information is found, extracts the code of the sign language action;
a key frame positioning unit, which, according to the code of the sign language action, places the virtual human's bones at the positioning points of the regions involved in the meaning of the sign, thereby obtaining the key frames of the virtual human's motion;
a virtual demonstration unit, which automatically generates interpolated frames from the key frames, obtains the sign language animation demonstrated by the virtual human, and displays it on a screen terminal.
2. The real-time sign language communication system based on spatial encoding according to claim 1, characterized in that: in the hand-shape information encoding unit the three finger states are encoded as 00 for extended, 01 for half bent and 10 for fully bent, and each finger is encoded in this way, so the state of the five fingers is represented by a single ten-character string.
3. The real-time sign language communication system based on spatial encoding according to claim 1 or 2, characterized in that: in the hand-shape information encoding unit, the spatial region of the palm is binary-coded as follows: the space above the mouth is divided into three regions, where the region to the left of the left ear is "1001", the region between the left ear and the right ear is "1000", and the region to the right of the right ear is "1010"; the space below the mouth and above the shoulders is divided into three regions, where the region to the left of the left ear is "0001", the region between the left ear and the right ear is "0000", and the region to the right of the right ear is "0010"; the space below the shoulders is likewise divided into three regions, where the region to the left of the left shoulder is "0101", the region between the left shoulder and the right shoulder is "0100", and the region to the right of the right shoulder is "0110"; and the palm direction is determined from the XYZ coordinates reported by the position tracker for the palm vector, with palm up being 0010, palm down 0011, palm to the left 0101, palm to the right 0100, palm forward 1001 and palm toward the body 1000.
CN2008101635548A 2008-12-23 2008-12-23 Real-time hand language communication system based on special codes Expired - Fee Related CN101609618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101635548A CN101609618B (en) 2008-12-23 2008-12-23 Real-time hand language communication system based on special codes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101635548A CN101609618B (en) 2008-12-23 2008-12-23 Real-time hand language communication system based on special codes

Publications (2)

Publication Number Publication Date
CN101609618A true CN101609618A (en) 2009-12-23
CN101609618B CN101609618B (en) 2012-05-30

Family

ID=41483357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101635548A Expired - Fee Related CN101609618B (en) 2008-12-23 2008-12-23 Real-time hand language communication system based on special codes

Country Status (1)

Country Link
CN (1) CN101609618B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102193633A (en) * 2011-05-25 2011-09-21 广州畅途软件有限公司 dynamic sign language recognition method for data glove
CN102723019A (en) * 2012-05-23 2012-10-10 苏州奇可思信息科技有限公司 Sign language teaching system
CN103309434A (en) * 2012-03-12 2013-09-18 联想(北京)有限公司 Instruction identification method and electronic equipment
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN104134060A (en) * 2014-08-03 2014-11-05 上海威璞电子科技有限公司 Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors
CN104462162A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Novel sign language recognition and collection method and device
CN104599553A (en) * 2014-12-29 2015-05-06 闽南师范大学 Barcode recognition-based sign language teaching system and method
CN104765455A (en) * 2015-04-07 2015-07-08 中国海洋大学 Man-machine interactive system based on striking vibration
CN106056994A (en) * 2016-08-16 2016-10-26 安徽渔之蓝教育软件技术有限公司 Assisted learning system for gesture language vocational education
CN104599554B (en) * 2014-12-29 2017-01-25 闽南师范大学 Two-dimensional code recognition-based sign language teaching system and method
CN108427910A (en) * 2018-01-30 2018-08-21 浙江凡聚科技有限公司 Deep-neural-network AR sign language interpreters learning method, client and server
CN110491250A (en) * 2019-08-02 2019-11-22 安徽易百互联科技有限公司 A kind of deaf-mute's tutoring system
CN113657101A (en) * 2021-07-20 2021-11-16 北京搜狗科技发展有限公司 Data processing method and device and data processing device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1506871A (en) * 2002-12-06 2004-06-23 徐晓毅 Sign language translating system
CN1664807A (en) * 2005-03-21 2005-09-07 山东省气象局 Adaptation of dactylology weather forecast in network
CN101005574A (en) * 2006-01-17 2007-07-25 上海中科计算技术研究所 Video frequency virtual humance sign language compiling system

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102193633A (en) * 2011-05-25 2011-09-21 广州畅途软件有限公司 dynamic sign language recognition method for data glove
CN103309434B (en) * 2012-03-12 2016-03-30 联想(北京)有限公司 A kind of instruction identification method and electronic equipment
CN103309434A (en) * 2012-03-12 2013-09-18 联想(北京)有限公司 Instruction identification method and electronic equipment
CN102723019A (en) * 2012-05-23 2012-10-10 苏州奇可思信息科技有限公司 Sign language teaching system
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN104462162A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Novel sign language recognition and collection method and device
CN104134060A (en) * 2014-08-03 2014-11-05 上海威璞电子科技有限公司 Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors
CN104134060B (en) * 2014-08-03 2018-01-05 上海威璞电子科技有限公司 Sign language interpreter and display sonification system based on electromyographic signal and motion sensor
CN104599553A (en) * 2014-12-29 2015-05-06 闽南师范大学 Barcode recognition-based sign language teaching system and method
CN104599554B (en) * 2014-12-29 2017-01-25 闽南师范大学 Two-dimensional code recognition-based sign language teaching system and method
CN104599553B (en) * 2014-12-29 2017-01-25 闽南师范大学 Barcode recognition-based sign language teaching system and method
CN104765455A (en) * 2015-04-07 2015-07-08 中国海洋大学 Man-machine interactive system based on striking vibration
CN106056994A (en) * 2016-08-16 2016-10-26 安徽渔之蓝教育软件技术有限公司 Assisted learning system for gesture language vocational education
CN108427910A (en) * 2018-01-30 2018-08-21 浙江凡聚科技有限公司 Deep-neural-network AR sign language interpreters learning method, client and server
CN110491250A (en) * 2019-08-02 2019-11-22 安徽易百互联科技有限公司 A kind of deaf-mute's tutoring system
CN113657101A (en) * 2021-07-20 2021-11-16 北京搜狗科技发展有限公司 Data processing method and device and data processing device

Also Published As

Publication number Publication date
CN101609618B (en) 2012-05-30

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120530