CN115035275A - Gesture language action linkage generation method - Google Patents

Gesture language action linkage generation method

Info

Publication number
CN115035275A
Authority
CN
China
Prior art keywords
hand
area
action
transition
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210318044.3A
Other languages
Chinese (zh)
Inventor
王春成
刘鑫
张冰
王恒
郑媛媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Doreal Software Co ltd
Original Assignee
Dalian Doreal Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Doreal Software Co ltd filed Critical Dalian Doreal Software Co ltd
Priority to CN202210318044.3A priority Critical patent/CN115035275A/en
Publication of CN115035275A publication Critical patent/CN115035275A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 - Teaching, or communicating with, the blind, deaf or mute
    • G09B21/009 - Teaching or communicating with deaf persons

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention belongs to the technical field of sign language translation and specifically relates to a gesture language action linkage generation method, which comprises the following steps. Step one: according to the standard gesture language lexicon, define the hand action range of all gesture language actions, divide the hand action range into N regions, and record the center position of each region. Step two: capture the hand transition motions between the center positions of every two regions, and establish a transition motion library. Step three: select a region according to the hand position, map each hand position transition in the standard word sequence to a transition between regions, read the corresponding hand transition motion from the transition motion library, and generate the gesture language linking action. The technical scheme of the invention makes the connection between two gesture language actions smooth and realistic and enhances the visual effect. Moreover, transition motions are called directly from the transition motion library, which reduces the amount of computation, saves program resources, and speeds up processing.

Description

Gesture language action linkage generation method
Technical Field
The invention belongs to the technical field of sign language translation, and particularly relates to a gesture language action linkage generation method.
Background
The "National Common Sign Language Vocabulary" (released together with the "National Common Braille Scheme") was formally published and implemented in 2018. It notably includes sign language widely used in the daily life of hearing-impaired people, replaces the many gestures that merely correspond one-to-one to Chinese characters, pays attention to describing the changes of body posture and facial expression during signing, and emphasizes the ideographic characteristics of gesture language as a language. The expression of gesture language depends on the concrete motion of a real person. With the development of artificial intelligence and the improvement of computing power, the gesture language actions of a 3D virtual human can now be realized, expressing gesture language in place of a real person and serving hearing-impaired people. To make gesture language actions coherent and realistic and to keep their expressed meaning accurate, research on gesture language action linking technology is of great significance.
The existing implementation method for converting text into the gesture language actions of a 3D virtual human in real time comprises the following steps: 1. According to the standard gesture language lexicon, create the corresponding 3D virtual human gesture language action for every word in the lexicon, forming an action library. 2. Segment the characters of a sentence into a standard word sequence, and match the 3D virtual human action corresponding to each standard word from the action library. 3. Concatenate the 3D virtual human actions corresponding to each standard word in the sequence into one continuous motion, drive the 3D virtual human to perform the gesture language actions, and output them to a screen.
The starting positions and postures of the two hands differ between independent standard gesture language actions. In step 3 of the implementation method above, suppose the ending position of the left hand in the previous standard action is A and the starting position of the left hand in the next standard action is B. Because A and B are different positions, directly concatenating the actions makes the hand jump from position A to position B after a gesture finishes, with no transition between the two positions, which gives a poor visual effect. If A and B are instead connected by simple linear interpolation, the link may look very strange and far from real human motion when A and B are far apart.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a gesture language action linkage generation method that solves the problem of making the hand transition coherently from position A at the end of one standard action to position B at the start of the next in 3D virtual human gesture language.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows. The gesture language action linkage generation method comprises the following steps:
Step one: according to the standard gesture language lexicon, define the hand action range of all gesture language actions, divide the hand action range into N regions, and record the center position of each region;
Step two: capture the hand transition motions between the center positions of every two regions, and establish a transition motion library;
Step three: select a region according to the hand position, map each hand position transition in the standard word sequence to a transition between regions, read the corresponding hand transition motion from the transition motion library, and generate the gesture language linking action.
Furthermore, after the hand action range is divided into N regions, the number of hand transition motions between the center positions of every two regions is (N-1) + (N-2) + ... + 2 + 1 = N(N-1)/2; these motions form the transition motion library.
Preferably, N is an integer and not less than three.
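As a sketch of the bookkeeping this count implies, the transition motion library can be keyed by unordered pairs of region indices (a hypothetical layout; the patent does not prescribe a data structure):

```python
from itertools import combinations

def transition_pairs(n):
    """Enumerate the unordered pairs of region indices that each need a
    recorded hand transition motion: (n-1) + (n-2) + ... + 1 = n(n-1)/2."""
    return list(combinations(range(1, n + 1), 2))

# With the six regions of the embodiment below, the library holds 15 motions.
pairs = transition_pairs(6)
```

Each pair (i, j) would index the recorded transition Gij described in the embodiment.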
Further, the gesture language standard word library is a word library formed according to sign language representations of words in the national common sign language vocabulary.
Furthermore, in step three, when the gesture language action corresponding to the standard word sequence moves across regions, the hand moves from the current position to the target position as follows: first from the current position to the center of the region containing it; then, by calling the hand transition motion from the transition motion library, from the center of the current region to the center of the region containing the target position; and finally from that center to the target position.
Further, the movement from the current position to the center of the region containing it is completed by linear interpolation; the movement from the center of the region containing the target position to the target position is likewise completed by linear interpolation.
Furthermore, in step three, when the gesture language action corresponding to the standard word sequence moves within the same region, the hand moves from the current position to the target position directly by linear interpolation.
Further, the current position is the hand position at the end of the previous gesture language action; the target position is the hand position at the start of the next gesture language action; the current position and the target position refer to the same hand.
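The short straight-line moves described above can be sketched with ordinary linear interpolation (the 3D-tuple point representation and the frame count are illustrative assumptions):

```python
def lerp(p, q, t):
    """Linearly interpolate between 3D points p and q, with t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def lerp_path(p, q, frames):
    """Sample a straight-line hand path from p to q over `frames` frames."""
    return [lerp(p, q, i / (frames - 1)) for i in range(frames)]

# Moving the hand 2 units along x over three frames passes the midpoint.
path = lerp_path((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 3)
```

Because these moves only cover the short distance between a hand position and its region center, simple interpolation suffices, as the description notes.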
Further, in the first step, the area is a spatial range area, and the central position of the area is the central position of the spatial range area.
Further, in step two, the hand transition motions are formed by collecting motion data of each part of the hands of multiple subjects, followed by sorting, feature analysis, evaluation and element extraction.
Further, in step three, the region is selected according to the hand position: when the hand position lies inside a certain region, that region is selected; when the hand position lies at the junction of regions, the straight-line distance from the hand position to each region is determined, and the region with the shortest straight-line distance is selected.
Further, the hand position is determined by the hand center position, and each region position is determined by its center position; when the hand center lies at the junction of regions and its straight-line distance to each region is the same, the region closest to the region being transitioned to is selected.
Furthermore, when the hand center lies at the junction of regions and the distances to the region being transitioned to are also the same, one region is selected according to a manually set priority.
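The selection rules in the three paragraphs above (nearest region center, then distance to the transition target, then a manual priority order) can be sketched as follows; all names and the tie tolerance are illustrative assumptions:

```python
import math

def select_region(hand_pos, centers, target_center=None, priority=()):
    """Pick the region a hand position belongs to: the nearest region
    center wins; exact ties go to the region closest to the transition
    target; any remaining tie is broken by a manual priority order."""
    dist = {r: math.dist(hand_pos, c) for r, c in centers.items()}
    d_min = min(dist.values())
    tied = [r for r in centers if math.isclose(dist[r], d_min)]
    def rank(r):
        to_target = (math.dist(centers[r], target_center)
                     if target_center is not None else 0.0)
        prio = priority.index(r) if r in priority else len(priority)
        return (to_target, prio)
    return min(tied, key=rank)
```

A region is represented here only by its center, matching the rule that each region position is determined by the region center position.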
The beneficial effects of the invention are as follows: the technical scheme makes the connection between two gesture language actions smooth and realistic and enhances the visual effect. Moreover, transition motions are called directly from the transition motion library, which reduces the amount of computation, saves program resources, and speeds up processing.
Drawings
FIG. 1 is a schematic view of the region division in step one;
FIG. 2 shows the gesture language action of the word "wish";
FIG. 3 shows the gesture language action of the word "Qigong";
in the figures: A is the position where the gesture language action starts.
Detailed Description
To make the structure and function of the invention clearer, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings.
The invention provides a gesture language action linkage generation method, which comprises the following steps:
Step one: according to the standard gesture language lexicon, define the hand action range of all gesture language actions, divide the hand action range into N regions, and record the center position of each region;
Step two: capture the hand transition motions between the center positions of every two regions, and establish a transition motion library;
Step three: select a region according to the hand position, map each hand position transition in the standard word sequence to a transition between regions, read the corresponding hand transition motion from the transition motion library, and generate the gesture language linking action.
Furthermore, after the hand action range is divided into N regions, the number of hand transition motions between the center positions of every two regions is (N-1) + (N-2) + ... + 2 + 1 = N(N-1)/2; these motions form the transition motion library.
Preferably, N is an integer of not less than three.
Further, the gesture language standard word library is a word library formed according to sign language representations of words in the national common sign language vocabulary.
Furthermore, in step three, when the gesture language action corresponding to the standard word sequence moves across regions, the hand moves from the current position to the target position as follows: first from the current position to the center of the region containing it; then, by calling the hand transition motion from the transition motion library, from the center of the current region to the center of the region containing the target position; and finally from that center to the target position.
Further, the movement from the current position to the center of the region containing it is completed by linear interpolation; the movement from the center of the region containing the target position to the target position is likewise completed by linear interpolation.
Furthermore, in step three, when the gesture language action corresponding to the standard word sequence moves within the same region, the hand moves from the current position to the target position directly by linear interpolation. Because the current position and the target position lie in the same region, they are close together, and direct interpolation produces no visible incongruity.
Further, the current position is the hand position at the end of the previous gesture language action; the target position is the hand position at the start of the next gesture language action; the current position and the target position refer to the same hand.
Further, in the first step, the area is a spatial range area, and the central position of the area is the central position of the spatial range area.
Based on the above technical solution, it should be noted that although the regions are divided in three-dimensional space, the hands move within a range not far in front of the body when signing, so the spatial range that has to be divided in front of the body is not large.
Further, in step two, the hand transition motions are formed by collecting motion data of each part of the hands of multiple subjects, followed by sorting, feature analysis, evaluation and element extraction.
Based on the above technical solution, it should be noted that the hand transition motions are produced by means such as motion capture and manual hand adjustment; because the number of hand transition motions is limited, they can all be produced in advance. A transition in real sign language can go from any position to any other position and cannot all be recorded beforehand, which is why this method is needed to generate the linking.
Further, in step three, the region is selected according to the hand position: when the hand position lies inside a certain region, that region is selected; when the hand position lies at the junction of regions, the straight-line distance from the hand position to each region is determined, and the region with the shortest straight-line distance is selected.
Further, the hand position is determined by the hand center position, and each region position is determined by its center position; when the hand center lies at the junction of regions and its straight-line distance to each region is the same, the region closest to the region being transitioned to is selected.
Furthermore, when the hand center lies at the junction of regions and the distances to the region being transitioned to are also the same, one region is selected according to a manually set priority.
Based on the above technical scheme, when the hand position lies at the junction of several regions at the same time, the region whose center is nearest in a straight line is selected. In the limiting case where the hand center is at the junction of several regions and its straight-line distances to the region centers are almost the same, the region whose center is closest to the target position is selected. If the centers of two regions are exactly equally far from the target position, either one may be selected as the region.
Referring to FIG. 1 and taking the left hand as an example, a cross-section in front of the human body is divided into six regions K1 to K6. The motion of the 3D virtual human's hand from the center Ki of any region to the center Kj of any other region is captured as a transition motion Gij, and these motions form the transition motion library.
Find the region Ka nearest to hand position A and the region Kb nearest to hand position B, then look up the transition motion Gab between Ka and Kb in the transition motion library.
The movement from A to the center of region Ka is completed by linear interpolation; because A is close to the center of Ka, even simple linear interpolation gives a good linking effect.
Similarly, the movement from the center of region Kb to B also uses linear interpolation.
Thus, the transition from A to B becomes:
A → center of Ka → (transition motion Gab) → center of Kb → B
In this way, the hand transitions from position A to position B with a smooth and realistic effect.
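A minimal sketch that strings the three movements together (the pair-keyed library, frame counts, and a simple nearest-center region lookup are illustrative assumptions; junction tie-breaking is omitted):

```python
import math

def lerp_path(p, q, frames):
    """Straight-line hand path from 3D point p to q over `frames` frames."""
    return [tuple(a + (b - a) * i / (frames - 1) for a, b in zip(p, q))
            for i in range(frames)]

def nearest_region(pos, centers):
    """Region whose center is nearest to pos (centers: {id: (x, y, z)})."""
    return min(centers, key=lambda r: math.dist(pos, centers[r]))

def link_motion(a, b, centers, transitions, frames=5):
    """A -> center(Ka) -> recorded Gab -> center(Kb) -> B; or direct
    interpolation when both positions share a region."""
    ka, kb = nearest_region(a, centers), nearest_region(b, centers)
    if ka == kb:
        return lerp_path(a, b, frames)
    gab = transitions[(min(ka, kb), max(ka, kb))]  # library keyed by sorted pair
    return (lerp_path(a, centers[ka], frames) + list(gab)
            + lerp_path(centers[kb], b, frames))
```

Only the short end segments are interpolated; the middle segment replays the pre-recorded transition motion, which is what keeps the link realistic.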
In this method the cross-section is divided into six regions; in practice, more regions can be used if finer actions are required.
Referring to FIG. 2 and FIG. 3, the transition from the gesture word "wish" to the gesture word "Qigong" is illustrated. Applying the region division to FIG. 2 and FIG. 3, the right hand is in region K1 when the "wish" action ends and in region K3 when the "Qigong" action begins.
When "wish" and "Qigong" need to be expressed in sequence, the linking is realized as follows:
1) The right hand moves linearly from the "wish" end position directly to the center of K1;
2) The G13 action in the transition motion library is linked, moving the right hand to the center of K3;
3) The right hand moves linearly from the center of K3 directly to the start position of "Qigong".
The above describes only preferred embodiments of the invention. Obviously, the invention is not limited to these embodiments, and many variations are possible. All modifications that a person skilled in the art can derive or suggest from the present disclosure should be considered within the scope of the invention.

Claims (10)

1. The gesture language action linkage generation method is characterized by comprising the following steps:
Step one: according to the standard gesture language lexicon, define the hand action range of all gesture language actions, divide the hand action range into N regions, and record the center position of each region;
Step two: capture the hand transition motions between the center positions of every two regions, and establish a transition motion library;
Step three: select a region according to the hand position, map each hand position transition in the standard word sequence to a transition between regions, read the corresponding hand transition motion from the transition motion library, and generate the gesture language linking action.
2. The method for generating a gesture language action linkage according to claim 1, wherein in step three, when the gesture language action corresponding to the standard word sequence moves across the region, the hand moves from the current position to the target position, first moves from the current position to the center position of the region where the current position is located, then calls a hand transition action in the transition action library, moves from the center position of the region where the current position is located to the center position of the region where the target position is located, and finally moves from the center position of the region where the target position is located to the target position.
3. The method according to claim 2, wherein the movement of the current position to the center of the area where the current position is located is performed by linear interpolation; and the movement of the central position of the area where the target position is located to the target position is completed by linear interpolation.
4. The method according to claim 1, wherein in step three, when the gesture language movement corresponding to the standard word sequence moves in the same area, the hand moves from the current position to the target position and is directly completed by linear interpolation.
5. The method for generating gesture language action linkage according to any one of claims 2 to 4, wherein the current position is a hand position at the end of the last gesture language action; the target position is a hand position when the next gesture language action is started, and the current position and the target position are hand positions of the same hand.
6. The method according to claim 1, wherein in the first step, the area is a spatial range area, and a center position of the area is a center position of the spatial range area.
7. The gesture language motion engagement generation method according to claim 1, wherein in step two, the hand transition motion is formed by collecting motion data of each part of the hand of a plurality of objects, sorting, feature analysis, evaluation and element extraction.
8. The gesture language action linking generation method according to claim 1, wherein in step three, the area is selected according to the hand position, and when the hand position is located in a certain area, the area is selected as the area; when the hand position is located at the junction of the areas, the linear distance between the hand position and each area is determined, and the area with the shortest linear distance to the hand position is selected as the area.
9. The gesture language action engagement generation method according to claim 8, wherein the hand position is determined by the hand center position, and each region position is determined by the region center position; when the hand center position lies at the junction of regions and its straight-line distance to each region is the same, the region closest to the region being transitioned to is selected.
10. The method as claimed in claim 1, wherein the standard lexicon of gesture language is a lexicon formed according to sign language representation of words in the "national common vocabulary of sign language".
CN202210318044.3A 2022-03-29 2022-03-29 Gesture language action linkage generation method Pending CN115035275A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210318044.3A CN115035275A (en) 2022-03-29 2022-03-29 Gesture language action linkage generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210318044.3A CN115035275A (en) 2022-03-29 2022-03-29 Gesture language action linkage generation method

Publications (1)

Publication Number Publication Date
CN115035275A true CN115035275A (en) 2022-09-09

Family

ID=83119822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210318044.3A Pending CN115035275A (en) 2022-03-29 2022-03-29 Gesture language action linkage generation method

Country Status (1)

Country Link
CN (1) CN115035275A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200506765A (en) * 2003-08-11 2005-02-16 Univ Nat Cheng Kung Method for generating and serially connecting sign language images
JP2005099977A (en) * 2003-09-24 2005-04-14 Hitachi Ltd Sign language editing method and device
CN101527092A (en) * 2009-04-08 2009-09-09 西安理工大学 Computer assisted hand language communication method under special session context
CN112073749A (en) * 2020-08-07 2020-12-11 中国科学院计算技术研究所 Sign language video synthesis method, sign language translation system, medium and electronic equipment
CN113407034A (en) * 2021-07-09 2021-09-17 呜啦啦(广州)科技有限公司 Sign language inter-translation method and system
CN113538632A (en) * 2021-06-15 2021-10-22 果不其然无障碍科技(苏州)有限公司 Sign language animation playing method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination