CN111144367A - Auxiliary semantic recognition method based on gesture recognition - Google Patents

Auxiliary semantic recognition method based on gesture recognition

Info

Publication number
CN111144367A
CN111144367A (application CN201911413012.6A)
Authority
CN
China
Prior art keywords
gesture
amplitude
speed
recognition
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911413012.6A
Other languages
Chinese (zh)
Other versions
CN111144367B (en)
Inventor
吴怡 (Wu Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Daniu Cognitive Technology Co.,Ltd.
Original Assignee
Chongqing Best Daniel Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Best Daniel Robot Co Ltd
Priority to CN201911413012.6A
Publication of CN111144367A
Application granted
Publication of CN111144367B
Current legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/18 Legal services

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Social Psychology (AREA)
  • Databases & Information Systems (AREA)
  • Technology Law (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of human-computer interaction and discloses an auxiliary semantic recognition method based on gesture recognition, comprising the following steps: S1, gesture demonstration: playing a gesture demonstration video before input; S2, image acquisition: acquiring image data in real time; S3, image processing: preprocessing the image data; S4, gesture recognition: recognizing the gesture in the gesture recognition area and recording the motion trajectory information of the gesture, then inputting the gesture and its motion trajectory information into a preset gesture recognition model to obtain a recognition result; S5, gesture judgment: judging whether the motion trajectory information meets the standard, and issuing a reminder if it does not; S6, semantic recognition: obtaining the recognized text and extracting consultation keywords, matching the consultation keywords against the legal keywords in a pre-stored legal word bank, and obtaining the legal provisions corresponding to the successfully matched legal keywords. The technical scheme of the invention ensures real-time recognition at low cost.

Description

Auxiliary semantic recognition method based on gesture recognition
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to an auxiliary semantic recognition method based on gesture recognition.
Background
Because law is a highly specialized field, it is difficult for laypeople to master and apply flexibly. People therefore usually turn to lawyers when they encounter legal problems. However, because practicing lawyers are relatively few and consultation is relatively expensive, lawyers' consulting services cannot satisfy everyone's consultation needs. To solve the general public's legal consultation problem, self-service legal consultation robots have emerged.
When people consult legal questions through the legal consultation robot, the consultant inputs a consultation question, and the system automatically matches a corresponding answer and sends it to the consultant.
At present, consultants input consultation questions mainly by keyboard or by voice. But for some special groups, neither keyboard input nor voice input is usable; for example, a deaf-mute person with little education may find keyboard input difficult and cannot use voice input at all. To enable everyone to complete an online consultation, an additional input mode, such as gesture recognition, needs to be introduced.
However, gesture recognition is computationally complex and has strict real-time requirements. If the user makes gestures too quickly and the computing capability of the legal consultation robot is insufficient, the processing time of each frame of image may be too long, and the consultant's gesture input cannot be recognized in real time. Moreover, gesture recognition is used far less frequently than traditional input modes such as keyboard and voice input, so upgrading the legal consultation robot's hardware to improve its computing capability would add cost for little benefit.
For this reason, a low-cost method capable of real-time recognition is required.
Disclosure of Invention
The invention provides an auxiliary semantic recognition method based on gesture recognition, which ensures real-time recognition at low cost.
To solve this technical problem, the present application provides the following technical solution:
An auxiliary semantic recognition method based on gesture recognition, comprising the following steps:
S1, gesture demonstration step: playing a gesture demonstration video before input;
S2, image acquisition step: acquiring image data in real time;
S3, image processing step: preprocessing the image data and dividing out a gesture recognition area;
S4, gesture recognition step: recognizing the gesture in the gesture recognition area and recording the motion trajectory information of the gesture; inputting the gesture and its motion trajectory information into a preset gesture recognition model and obtaining a recognition result; the recognition result is text;
S5, gesture judgment step: judging whether the motion trajectory information meets the standard, and issuing a reminder if it does not;
S6, semantic recognition step: obtaining the text and extracting consultation keywords, matching the consultation keywords against the legal keywords in a pre-stored legal word bank, obtaining the legal provisions corresponding to the successfully matched legal keywords, and outputting the legal provisions.
The principle and beneficial effects of this basic scheme are as follows:
In this scheme, by watching the gesture demonstration video, the user learns how to make standard gesture input, which reduces the probability of the user gesturing too fast. When the user's gesture does not meet the standard, a reminder is issued, further guiding the user to adjust the current gesture input speed. Through this effective guidance, the user's gesture input can be matched to the current processing capability, avoiding the situation where gesture input cannot be recognized in real time because computing capability is insufficient. Moreover, no hardware upgrade is required, which keeps the cost down.
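For orientation only, the overall flow of S1 to S6 can be sketched as a simple loop. Everything below is illustrative: the helper functions are hypothetical stand-ins for the modules described in the embodiments, not part of the claimed scheme.

```python
# Minimal sketch of the S1-S6 flow. All helper functions
# (play_demo_video, capture_frame, preprocess_and_crop, recognize,
# trajectory_meets_standard, send_reminder, match_provisions,
# input_finished) are hypothetical stand-ins for the modules
# described in the embodiments below.

def assisted_semantic_recognition():
    play_demo_video()                            # S1: gesture demonstration
    words = []
    while not input_finished():
        frame = capture_frame()                  # S2: real-time image acquisition
        region = preprocess_and_crop(frame)      # S3: preprocessing + region division
        gesture, trajectory = recognize(region)  # S4: gesture recognition
        if not trajectory_meets_standard(trajectory):
            send_reminder(trajectory)            # S5: gesture judgment + reminder
        else:
            words.append(gesture.text)           # recognition result is text
    return match_provisions("".join(words))      # S6: semantic recognition
```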
Further, the method includes S0, an information collection step: collecting the user's personal profile information, the personal profile information including name and disability type, the disability types including hearing and speech.
Collecting the user's personal profile information makes the user's disability type known to the system.
Further, S5 includes the following sub-steps:
S501, judging, based on the motion trajectory information, whether the motion speed exceeds a first threshold;
S502, if the first threshold is exceeded, playing a gesture-speed-too-fast reminder.
Setting the first threshold makes it possible to determine whether the user is gesturing too fast; playing the gesture-speed-too-fast reminder when this happens prompts the user to slow down.
Further, in S501, whether the motion amplitude is lower than a second threshold is also judged based on the motion trajectory information;
in S502, if the gesture amplitude is lower than the second threshold, a gesture-amplitude-too-small reminder is also played.
Setting the second threshold makes it possible to determine whether the user's gestures are too small; playing the gesture-amplitude-too-small reminder when this happens prompts the user to enlarge the gesture amplitude.
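As an illustration of the S501/S502 threshold checks, the following sketch assumes the motion trajectory is recorded as a list of (t, x, y) samples in image coordinates and that amplitude is taken as the largest distance between two trajectory coordinates (as in the embodiment below); both threshold values are invented for the example.

```python
import math

# Illustrative threshold values, not taken from the patent.
FIRST_THRESHOLD = 300.0   # max allowed motion speed, pixels per second (assumed)
SECOND_THRESHOLD = 80.0   # min required motion amplitude, pixels (assumed)

def motion_speed(trajectory):
    """Average speed over consecutive (t_seconds, x, y) trajectory samples."""
    dist = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        dist += math.hypot(x1 - x0, y1 - y0)
    elapsed = trajectory[-1][0] - trajectory[0][0]
    return dist / elapsed if elapsed > 0 else 0.0

def motion_amplitude(trajectory):
    """Largest distance between any two trajectory coordinates (assumed reading)."""
    if len(trajectory) < 2:
        return 0.0
    return max(
        math.hypot(x1 - x0, y1 - y0)
        for i, (_, x0, y0) in enumerate(trajectory)
        for (_, x1, y1) in trajectory[i + 1:]
    )

def check_trajectory(trajectory):
    """S501/S502: compare against both thresholds and collect reminders."""
    reminders = []
    if motion_speed(trajectory) > FIRST_THRESHOLD:        # S501 (speed)
        reminders.append("gesture speed too fast")        # S502
    if motion_amplitude(trajectory) < SECOND_THRESHOLD:   # S501 (amplitude)
        reminders.append("gesture amplitude too small")   # S502
    return reminders
```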
Further, the method also includes:
S503, after the gesture-speed-too-fast reminder is played, continuing to judge, based on the motion trajectory information, whether the motion speed exceeds the first threshold;
S504, if the first threshold is still exceeded, playing a speed guide file.
If the user's gesture input is still too fast after the gesture-speed-too-fast reminder, the user may be unclear about what speed is appropriate; playing the speed guide file helps the user find a suitable speed.
Further, in S504, the speed guide file is speed guide music or a speed guide video.
Listening to the guide music or watching the guide video helps the user find a suitable rhythm and thereby slow down the gestures.
Further, the method includes S500, judging whether the user has a hearing disability based on the user's personal profile;
in S504, when the guide file is played, the speed guide video is played if the user has a hearing disability; otherwise, the speed guide music is played.
A user with a hearing disability cannot hear the speed guide music; playing the speed guide video instead allows hearing-disabled users to find a suitable rhythm as well.
Further, S0 includes the following sub-steps:
S001, collecting a disability card image;
S002, recognizing the characters in the disability card image;
S003, extracting the personal profile information from the recognized characters.
Because input is inconvenient for disabled users, extracting the personal profile information directly from the disability card by image recognition makes the system more convenient for them to use.
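A minimal sketch of sub-steps S001 to S003 follows. The patent does not name an OCR engine; pytesseract with a simplified-Chinese model is used here purely as an assumption, and the field patterns for the card text are illustrative.

```python
import re
import cv2
import pytesseract

# Sketch of S001-S003. Assumptions: pytesseract is installed with the
# chi_sim traineddata, and the card prints "姓名" (name) and "残疾类别"
# (disability type) labels; both field patterns are illustrative.

def extract_profile(card_image_path):
    image = cv2.imread(card_image_path)                       # S001: card image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray, lang="chi_sim")  # S002: OCR
    # S003: pull the name and disability type out of the recognized text.
    name = re.search(r"姓名[::]?\s*(\S+)", text)
    disability = re.search(r"残疾类别[::]?\s*(\S+)", text)
    return {
        "name": name.group(1) if name else None,
        "disability_type": disability.group(1) if disability else None,
    }
```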
Further, the method includes S505, after the gesture-amplitude-too-small reminder is played, continuing to judge, based on the motion trajectory information, whether the motion amplitude is lower than the second threshold;
S506, if it is lower than the second threshold, playing an amplitude guide file; the amplitude guide file is amplitude guide music or an amplitude guide video; if the user has a hearing disability, the amplitude guide video is played; otherwise, the amplitude guide music is played.
A user with a hearing disability cannot hear the amplitude guide music; playing the amplitude guide video instead allows hearing-disabled users to find a suitable gesture amplitude by watching it.
Further, the volume of the amplitude guide music is inversely proportional to the motion amplitude, and the brightness of the amplitude guide video is inversely proportional to the motion amplitude.
This reverse-incentive approach effectively guides the user to enlarge the gesture amplitude.
Drawings
Fig. 1 is a flowchart of Embodiment 1 of the auxiliary semantic recognition method based on gesture recognition.
Detailed Description
The following provides further details through specific embodiments:
Embodiment 1
As shown in Fig. 1, the auxiliary semantic recognition method based on gesture recognition of this embodiment includes the following steps:
S0, information collection step: collecting the user's personal profile information, which includes name, gender, age and disability type; disability types include hearing, speech, physical, intellectual and multiple disabilities.
In this embodiment, the method specifically includes the following substeps:
s001, collecting a disability card image;
s002, identifying characters in the disability card image;
s003, extracting the personal data information from the recognized characters.
S1, gesture demonstration step: playing a gesture demonstration video before the user inputs. In this embodiment, the gesture demonstration video is a pre-recorded demonstration video in which both the gesture amplitude and the gesture speed meet the standard, for the user's reference.
S2, image acquisition: acquiring image data in real time;
S3, image processing step: preprocessing the image data and dividing out a gesture recognition area;
S4, gesture recognition step: recognizing the gesture in the gesture recognition area and recording the motion trajectory information of the gesture; inputting the gesture and its motion trajectory information into a preset gesture recognition model and obtaining a recognition result; the recognition result is text. In this embodiment, the motion trajectory information includes the motion direction, the motion speed and the motion trajectory coordinates.
S5, gesture judgment step: judging whether the motion trajectory information meets the standard, and issuing a reminder if it does not.
In this embodiment, the method specifically includes the following substeps:
S500, judging whether the user has a hearing disability based on the user's personal profile;
S501, judging, based on the motion trajectory information, whether the motion speed exceeds a first threshold and whether the motion amplitude is lower than a second threshold;
S502, if the motion speed exceeds the first threshold, playing a gesture-speed-too-fast reminder; if the gesture amplitude is lower than the second threshold, also playing a gesture-amplitude-too-small reminder. In this embodiment, both reminders have a voice version and a text version. For example, the text of the gesture-speed-too-fast reminder is "Your gesture speed is too fast; please slow down." If the user has a hearing disability, the text version of each reminder is played; otherwise, the voice version is played.
S503, after the gesture-speed-too-fast reminder is played, continuing to judge, based on the motion trajectory information, whether the motion speed exceeds the first threshold;
S504, if the first threshold is still exceeded, playing a speed guide file. The speed guide file is speed guide music or a speed guide video: if the user has a hearing disability, the speed guide video is played; otherwise, the speed guide music is played. In this embodiment, the speed guide music is music with a drumbeat, and the speed guide video is a video of a percussion performance, where the rhythm of the drumbeat and of the percussion matches the speed of a gesture that meets the standard. For example, in the speed guide music, the gesture input interval for each word is accompanied by a heavy bass beat; in the speed guide video, the gesture input interval for each word is accompanied by a single drum strike. Other embodiments may use other approaches; for example, the speed guide music may be music that continuously repeats "1, 2, 3, 4", and the speed guide video may be a video of a music waveform.
S6, semantic recognition step: obtaining the text and extracting consultation keywords; matching the consultation keywords against the legal keywords in a pre-stored legal word bank and obtaining the legal provisions corresponding to the successfully matched legal keywords; and outputting the legal provisions.
This embodiment also provides an auxiliary semantic recognition system based on gesture recognition, comprising a character recognition module, an image acquisition module, an image processing module, a gesture recognition module, a semantic recognition module and a demonstration module.
The image acquisition module collects the disability card image; the character recognition module recognizes the characters in the disability card image and extracts the personal profile information from the recognized characters. In this embodiment, the profile information includes name, gender, age and disability type, and disability types include hearing, speech, physical, intellectual and multiple disabilities.
The image acquisition module also acquires image data in real time. The demonstration module plays the gesture demonstration video before the image acquisition module begins real-time acquisition. In this embodiment, the gesture demonstration video is a pre-recorded demonstration video in which both the gesture amplitude and the gesture speed meet the standard, for the user's reference.
The image processing module acquires the image data and preprocesses it. Preprocessing includes grayscale conversion, noise reduction and sharpening. In this embodiment, median filtering is used for noise reduction. The median filter, a typical nonlinear digital filter, removes salt-and-pepper noise from the image data. Its principle is to define a neighborhood window w centered on pixel p(i, j), sort all the pixel gray values in the window from smallest to largest, and take the middle value as the new gray value of the center pixel; sliding the window across the image filters the whole image.
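The median filtering principle just described can be written directly as follows; this is a didactic sketch (in practice a library call such as OpenCV's cv2.medianBlur achieves the same result much faster).

```python
import numpy as np

# Direct sketch of the median filtering principle described above:
# slide a window w centred on pixel p(i, j), sort the grey values in
# the window, and replace the centre pixel with the middle value.
# Equivalent in effect to cv2.medianBlur(gray, 3), which is much faster.

def median_filter(gray, window=3):
    pad = window // 2
    padded = np.pad(gray, pad, mode="edge")   # replicate borders
    out = np.empty_like(gray)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            # padded[i:i+window, j:j+window] is the window centred on (i, j)
            out[i, j] = np.median(padded[i:i + window, j:j + window])
    return out
```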
After preprocessing, the image processing module divides out the gesture recognition area from the image data. When dividing the gesture recognition area, the image processing module identifies the neck, the top of the head and the waist of the human body; it extends 65-70 cm to the left and to the right of the neck as the transverse boundaries (specifically 65 cm in this embodiment) and uses the top of the head and the waist as the longitudinal boundaries.
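As an illustration of this area division, the sketch below assumes the three landmarks are already available as pixel coordinates and that a pixels-per-cm calibration factor is known; both assumptions go beyond what the patent specifies.

```python
# Sketch of the gesture-recognition-area division described above.
# Landmark detection itself is out of scope here; neck, head_top and
# waist are assumed (x, y) pixel coordinates, and pixels_per_cm is an
# assumed calibration factor (it could, for example, be estimated from
# the detected head-to-waist distance and an average torso length).

LATERAL_EXTENT_CM = 65  # this embodiment extends 65 cm left and right of the neck

def gesture_region(neck, head_top, waist, pixels_per_cm):
    extent = int(LATERAL_EXTENT_CM * pixels_per_cm)
    left = neck[0] - extent      # transverse boundaries: neck +/- 65 cm
    right = neck[0] + extent
    top = head_top[1]            # longitudinal boundaries: head top and waist
    bottom = waist[1]
    return left, top, right, bottom

# Usage: left, top, right, bottom = gesture_region(...)
#        crop = frame[top:bottom, max(left, 0):right]
```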
The gesture recognition module recognizes the gesture in the gesture recognition area and records the motion trajectory information of the gesture. In this embodiment, the motion trajectory information includes the motion direction, the motion speed and the motion trajectory coordinates.
The gesture recognition module judges, based on the motion trajectory information, whether the motion speed exceeds the first threshold; if it does, the module sends a speed reminder instruction to the demonstration module. The gesture recognition module also judges, based on the motion trajectory information, whether the motion amplitude is lower than the second threshold; if it is, the module sends an amplitude reminder instruction to the demonstration module. In this embodiment, the motion amplitude is obtained by calculating the motion distance between two motion trajectory coordinates.
The demonstration module plays the gesture-speed-too-fast reminder in response to a speed reminder instruction and the gesture-amplitude-too-small reminder in response to an amplitude reminder instruction. In this embodiment, both reminders have a voice version and a text version; for example, the text of the gesture-speed-too-fast reminder is "Your gesture speed is too fast; please slow down."
Before playing either reminder, the demonstration module obtains the personal profile information and judges whether the user has a hearing disability. If the user has a hearing disability, the demonstration module plays the text version of the reminder; otherwise, it plays the voice version.
After the demonstration module plays the gesture-speed-too-fast reminder, the gesture recognition module continues to judge whether the motion speed exceeds the first threshold. If it does, the gesture recognition module sends a speed guide instruction to the demonstration module, and the demonstration module plays a speed guide file according to that instruction. In this embodiment, the speed guide file is speed guide music or a speed guide video: if the user has a hearing disability, the demonstration module plays the speed guide video; otherwise, it plays the speed guide music. In this embodiment, the speed guide music is music with a drumbeat, and the speed guide video is a video of a percussion performance, where the rhythm of the drumbeat and of the percussion matches the speed of a gesture that meets the standard. For example, in the speed guide music, the gesture input interval for each word is accompanied by a heavy bass beat; in the speed guide video, the gesture input interval for each word is accompanied by a single drum strike. Other embodiments may use other approaches, such as speed guide music that continuously repeats "1, 2, 3, 4" or a speed guide video showing a music waveform.
In this embodiment, the gesture speed that meets the standard is determined by the processing capability of the gesture recognition module: if the module's hardware configuration is high and its processing capability strong, the standard gesture speed can be raised appropriately; if the hardware configuration is low and the processing capability weak, the standard gesture speed can be lowered appropriately.
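The patent does not state how the standard speed is derived from the processing capability; the following sketch shows one plausible calibration rule, under the assumption that the allowed gesture speed is scaled by the frame rate the hardware can actually sustain.

```python
import time

# Assumed calibration rule: measure the average per-frame processing
# time on sample frames, then scale the allowed gesture speed down when
# the hardware cannot keep up with a target frame rate. The patent only
# states that stronger hardware permits a faster standard speed.

def calibrate_first_threshold(process_frame, sample_frames,
                              base_threshold=300.0, target_fps=15.0):
    start = time.perf_counter()
    for frame in sample_frames:
        process_frame(frame)                 # one full S3+S4 pass per frame
    avg = (time.perf_counter() - start) / len(sample_frames)
    achievable_fps = 1.0 / avg
    # Weak hardware -> lower achievable fps -> lower speed threshold.
    return base_threshold * min(1.0, achievable_fps / target_fps)
```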
The gesture recognition module also inputs the gesture and its motion trajectory information into a preset gesture recognition model and obtains a recognition result; the recognition result is text. In this embodiment, the preset gesture recognition model is a trained convolutional neural network model.
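The embodiment only states that the model is a trained convolutional neural network; the architecture below is a minimal illustrative stand-in (PyTorch), not the patented model.

```python
import torch
import torch.nn as nn

# Minimal illustrative CNN classifier; the patent does not specify the
# architecture, input size or class set, so all of these are assumptions.

class GestureCNN(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> two 2x2 poolings -> 16x16 feature maps
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):          # x: (batch, 1, 64, 64) grayscale crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Inference maps the predicted class index to a text label through a
# lookup table built at training time (e.g. labels[int(logits.argmax())]).
```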
The semantic recognition module obtains the text and extracts consultation keywords. It pre-stores a legal word bank containing a number of legal keywords and the legal provisions corresponding to each keyword; it matches the consultation keywords against the legal keywords in the legal word bank, obtains the legal provisions corresponding to the successfully matched legal keywords, and outputs those legal provisions. Answers to the user's questions can subsequently be matched and sent on the basis of the output legal provisions; that subsequent processing is outside the scope of this scheme and is not discussed here.
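A minimal sketch of this matching step follows. The patent does not specify the keyword extraction or matching algorithm; simple substring matching against an illustrative word-bank fragment is assumed here.

```python
# Sketch of the S6 semantic recognition step: match consultation
# keywords from the recognized text against a pre-stored legal word
# bank and return the corresponding provisions. Substring matching and
# the word-bank entries are illustrative assumptions.

LEGAL_WORD_BANK = {
    "labor contract": "<corresponding legal provision text>",
    "divorce": "<corresponding legal provision text>",
}

def match_provisions(recognized_text):
    matches = []
    for keyword, provision in LEGAL_WORD_BANK.items():
        if keyword in recognized_text:   # consultation keyword matched
            matches.append(provision)
    return matches
```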
Embodiment 2
This embodiment differs from Embodiment 1 in that it further includes:
S505, after the gesture-amplitude-too-small reminder is played, continuing to judge, based on the motion trajectory information, whether the motion amplitude is lower than the second threshold;
S506, if it is lower than the second threshold, playing an amplitude guide file; the amplitude guide file is amplitude guide music or an amplitude guide video; if the user has a hearing disability, the amplitude guide video is played; otherwise, the amplitude guide music is played. The volume of the amplitude guide music is inversely proportional to the motion amplitude, and the brightness of the amplitude guide video is inversely proportional to the motion amplitude: the smaller the motion amplitude, the louder the amplitude guide music, and the smaller the motion amplitude, the brighter the amplitude guide video. When the user's gesture amplitude is too small, the guide music becomes loud and unpleasantly noisy; for a better experience the user will want to lower the volume, and to do so must enlarge the gesture amplitude, which achieves the goal of guiding the user to enlarge the gestures. The amplitude guide video works on the same principle.
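As an illustration of the inverse relation just described, the mapping below converts a measured motion amplitude into a volume or brightness level; the amplitude range and output bounds are assumed values.

```python
# Sketch of the inverse mapping described above: the smaller the motion
# amplitude, the louder the amplitude guide music (or the brighter the
# amplitude guide video). The amplitude range and output bounds are
# assumed values for illustration.

def inverse_level(amplitude, amp_min=20.0, amp_max=200.0,
                  out_min=0.2, out_max=1.0):
    """Map amplitude in [amp_min, amp_max] to a level from out_max down to out_min."""
    clamped = max(amp_min, min(amplitude, amp_max))
    frac = (clamped - amp_min) / (amp_max - amp_min)
    return out_max - frac * (out_max - out_min)

# volume = inverse_level(motion_amplitude)      # small amplitude -> loud music
# brightness = inverse_level(motion_amplitude)  # small amplitude -> bright video
```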
Embodiment 3
This embodiment differs from Embodiment 2 in that the amplitude guide music and the speed guide music are the same piece of music, except that its volume changes when it is used as the amplitude guide music; likewise, the amplitude guide video and the speed guide video are the same video, except that its brightness changes when it is used as the amplitude guide video. When the user's motion speed exceeds the first threshold and the motion amplitude is below the second threshold at the same time, two different pieces of music or two different videos do not need to be played, which avoids a conflict.
The above are merely embodiments of the present invention. Common general knowledge, such as well-known specific structures and characteristics of the schemes, is not described here in detail; a person skilled in the art knows the common technical knowledge and the prior art in this field as of the application date or priority date, can apply conventional experimental means, and can, in light of the teaching provided in this application, combine their own abilities to perfect and implement the scheme. Some typical known structures or known methods should not become obstacles to the implementation of the invention by a person skilled in the art. Several changes and modifications may be made without departing from the structure of the invention; these should also be regarded as within the protection scope of the invention and will not affect the effect of its implementation or the practicability of the patent. The scope of protection of this application is determined by the content of the claims, and the specific embodiments and other descriptions in the specification serve to interpret the content of the claims.

Claims (10)

1. An auxiliary semantic recognition method based on gesture recognition, characterized by comprising the following steps:
S1, gesture demonstration step: playing a gesture demonstration video before input;
S2, image acquisition step: acquiring image data in real time;
S3, image processing step: preprocessing the image data and dividing out a gesture recognition area;
S4, gesture recognition step: recognizing the gesture in the gesture recognition area and recording the motion trajectory information of the gesture; inputting the gesture and its motion trajectory information into a preset gesture recognition model and obtaining a recognition result; the recognition result is text;
S5, gesture judgment step: judging whether the motion trajectory information meets the standard, and issuing a reminder if it does not;
S6, semantic recognition step: obtaining the text and extracting consultation keywords, matching the consultation keywords against the legal keywords in a pre-stored legal word bank, obtaining the legal provisions corresponding to the successfully matched legal keywords, and outputting the legal provisions.
2. The auxiliary semantic recognition method based on gesture recognition according to claim 1, characterized in that: the method further comprises S0, an information collection step: collecting the user's personal profile information, the personal profile information including name and disability type, the disability types including hearing and speech.
3. The auxiliary semantic recognition method based on gesture recognition according to claim 2, characterized in that S5 further comprises the following sub-steps:
S501, judging, based on the motion trajectory information, whether the motion speed exceeds a first threshold;
S502, if the first threshold is exceeded, playing a gesture-speed-too-fast reminder;
setting the first threshold makes it possible to determine whether the user is gesturing too fast, and playing the gesture-speed-too-fast reminder when this happens prompts the user to slow down.
4. The auxiliary semantic recognition method based on gesture recognition according to claim 3, characterized in that: in S501, whether the motion amplitude is lower than a second threshold is also judged based on the motion trajectory information;
in S502, if the gesture amplitude is lower than the second threshold, a gesture-amplitude-too-small reminder is also played.
5. The auxiliary semantic recognition method based on gesture recognition according to claim 4, characterized by further comprising:
S503, after the gesture-speed-too-fast reminder is played, continuing to judge, based on the motion trajectory information, whether the motion speed exceeds the first threshold;
S504, if the first threshold is still exceeded, playing a speed guide file.
6. The auxiliary semantic recognition method based on gesture recognition according to claim 5, characterized in that: in S504, the speed guide file is speed guide music or a speed guide video.
7. The auxiliary semantic recognition method based on gesture recognition according to claim 6, characterized by further comprising S500, judging whether the user has a hearing disability based on the user's personal profile;
in S504, when the guide file is played, the speed guide video is played if the user has a hearing disability; otherwise, the speed guide music is played.
8. The auxiliary semantic recognition method based on gesture recognition according to claim 7, characterized in that S0 comprises the following sub-steps:
S001, collecting a disability card image;
S002, recognizing the characters in the disability card image;
S003, extracting the personal profile information from the recognized characters.
9. The auxiliary semantic recognition method based on gesture recognition according to claim 8, characterized by further comprising S505, after the gesture-amplitude-too-small reminder is played, continuing to judge, based on the motion trajectory information, whether the motion amplitude is lower than the second threshold;
S506, if it is lower than the second threshold, playing an amplitude guide file; the amplitude guide file is amplitude guide music or an amplitude guide video; if the user has a hearing disability, the amplitude guide video is played; otherwise, the amplitude guide music is played.
10. The auxiliary semantic recognition method based on gesture recognition according to claim 9, characterized in that: the volume of the amplitude guide music is inversely proportional to the motion amplitude, and the brightness of the amplitude guide video is inversely proportional to the motion amplitude.
CN201911413012.6A 2019-12-31 2019-12-31 Auxiliary semantic recognition method based on gesture recognition Active CN111144367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911413012.6A CN111144367B (en) 2019-12-31 2019-12-31 Auxiliary semantic recognition method based on gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911413012.6A CN111144367B (en) 2019-12-31 2019-12-31 Auxiliary semantic recognition method based on gesture recognition

Publications (2)

Publication Number Publication Date
CN111144367A (en) 2020-05-12
CN111144367B (en) 2021-02-23

Family

ID=70522634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911413012.6A Active CN111144367B (en) 2019-12-31 2019-12-31 Auxiliary semantic recognition method based on gesture recognition

Country Status (1)

Country Link
CN (1) CN111144367B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0934863A (en) * 1995-07-14 1997-02-07 Hitachi Ltd Information integral processing method by neural network
CN106254960A (en) * 2016-08-30 2016-12-21 福州瑞芯微电子股份有限公司 A kind of video call method for communication disorders and system
CN107169449A (en) * 2017-05-14 2017-09-15 吉林大学 Chinese sign language interpretation method based on depth transducer
CN109086371A (en) * 2018-07-20 2018-12-25 吴怡 A kind of semantic net interactive system and exchange method for legal advice
CN110598576A (en) * 2019-08-21 2019-12-20 腾讯科技(深圳)有限公司 Sign language interaction method and device and computer medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324206A (en) * 2020-02-28 2020-06-23 重庆百事得大牛机器人有限公司 Gesture interaction-based confirmation information identification system and method
CN111324206B (en) * 2020-02-28 2023-07-18 重庆百事得大牛机器人有限公司 System and method for identifying confirmation information based on gesture interaction
CN112286360A (en) * 2020-11-04 2021-01-29 北京沃东天骏信息技术有限公司 Method and apparatus for operating a mobile device
WO2022095674A1 (en) * 2020-11-04 2022-05-12 北京沃东天骏信息技术有限公司 Method and apparatus for operating mobile device
CN112691002A (en) * 2021-03-24 2021-04-23 上海傅利叶智能科技有限公司 Control method and device based on gesture interaction rehabilitation robot and rehabilitation robot
CN112691002B (en) * 2021-03-24 2021-06-29 上海傅利叶智能科技有限公司 Control device based on gesture interaction rehabilitation robot and rehabilitation robot
CN113589931A (en) * 2021-07-30 2021-11-02 重庆长安汽车软件科技有限公司 Gesture function display module, in-vehicle gesture system and automobile
CN113591822A (en) * 2021-10-08 2021-11-02 广州市简筱网络科技有限公司 Special crowd gesture interaction information consultation and recognition system
CN113591822B (en) * 2021-10-08 2022-02-08 广州市简筱网络科技有限公司 Special crowd gesture interaction information consultation and recognition system
CN113920799A (en) * 2021-10-29 2022-01-11 漳州爱果冻信息科技有限公司 Simulation experiment teaching system

Also Published As

Publication number Publication date
CN111144367B (en) 2021-02-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Office 1, 18, building 7, Zhongyu xiangnai mansion, No. 101, Xinji Avenue, Longxi street, Yubei District, Chongqing

Patentee after: Chongqing Daniu Cognitive Technology Co.,Ltd.

Address before: Office 1, 18, building 7, Zhongyu xiangnai mansion, No. 101, Xinji Avenue, Longxi street, Yubei District, Chongqing

Patentee before: Chongqing best Daniel robot Co.,Ltd.