WO2018107489A1 - Method and apparatus for assisting persons with hearing and speech impairments, and electronic device - Google Patents

Method and apparatus for assisting persons with hearing and speech impairments, and electronic device Download PDF

Info

Publication number
WO2018107489A1
WO2018107489A1 (PCT/CN2016/110475; CN2016110475W)
Authority
WO
WIPO (PCT)
Prior art keywords
sound
display signal
display
person
deaf
Prior art date
Application number
PCT/CN2016/110475
Other languages
English (en)
Chinese (zh)
Inventor
廉士国
***
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201680006924.XA priority Critical patent/CN107223277A/zh
Priority to PCT/CN2016/110475 priority patent/WO2018107489A1/fr
Publication of WO2018107489A1 publication Critical patent/WO2018107489A1/fr

Links

Images

Classifications

    • G — PHYSICS
    • G09 — EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B — EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B21/00 — Teaching, or communicating with, the blind, deaf or mute

Definitions

  • the present invention relates to the field of smart device technologies, and in particular, to a deaf-mute assist method, device, and electronic device.
  • Hearing is an important way for humans to perceive the world. Through hearing, human beings can realize the transmission and feedback of thoughts and feelings between people, and avoid dangerous situations in the environment.
  • the number of people with hearing disabilities is the highest among the five categories of disability, which include visual disability and other disabilities.
  • people with hearing and speech disabilities face many obstacles in daily life due to their impaired hearing and language ability, and are therefore in need of assistance.
  • common assistive devices for deaf-mute people include hearing aids and cochlear implants. These devices are helpful for many deaf people, but they also have certain limitations.
  • different degrees of disability impose different requirements on the parameters of a hearing aid or cochlear implant, so the user faces a complicated selection process when choosing a suitable product.
  • in short, the deaf-mute assistive equipment in the prior art has certain limitations, and how to assist a deaf-mute person in perceiving sound conveniently and quickly remains a problem continuously studied by those skilled in the art.
  • Embodiments of the present invention provide a deaf-mute person assisting method, apparatus, and electronic device, which are mainly used to assist a deaf-mute person to perceive sound conveniently and quickly.
  • a method for assisting a deaf-mute person, comprising: receiving a sound; recognizing the sound and converting the sound into a display signal according to the recognition result; and performing display under the driving of the display signal.
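The three claimed steps (receive, recognize and convert, display) can be illustrated with a minimal self-contained sketch. The lookup-table recognizer and the dictionary-shaped "display signal" below are illustrative assumptions, not the patent's actual implementation:

```python
# A minimal sketch of the claimed three-step method: receive a sound,
# recognize it and convert it into a "display signal" according to the
# recognition result, then display under the driving of that signal.
# The recognizer and the signal format are hypothetical stand-ins.

def receive_sound(source):
    """Step S11: receive a sound sample (here, a labeled stub)."""
    return source

def recognize_and_convert(sound):
    """Step S12: recognize the sound and build a display signal."""
    # Hypothetical recognition results: category, presentation kind, payload.
    known = {
        "hello":  ("speech",  "text",            "hello"),
        "bark":   ("ambient", "identifier",      "dog cartoon"),
        "engine": ("ambient", "dynamic_picture", "moving car"),
    }
    category, kind, payload = known.get(
        sound, ("ambient", "identifier", "unknown sound"))
    return {"kind": kind, "payload": payload, "category": category}

def display(signal):
    """Step S13: render the content driven by the display signal."""
    return f"[{signal['kind']}] {signal['payload']}"

def assist(source):
    return display(recognize_and_convert(receive_sound(source)))
```

For example, `assist("bark")` yields an identifier-style rendering rather than text, mirroring the ambient-sound branch described later in the document.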
  • a deaf-mute assist device, including:
  • a receiving unit configured to receive a sound
  • a converting unit configured to identify the sound and convert the sound into a display signal according to the recognition result
  • a display unit for displaying under the driving of the display signal.
  • an electronic device comprising: a sound collection device, a display device, a memory, and a processor, the sound collection device, the display device, and the memory being coupled to the processor; the memory is configured to store computer-executable code,
  • the computer-executable code being used to control the processor to perform the deaf-mute assist method of the first aspect.
  • a storage medium for storing computer software instructions for the deaf-mute assist device of the second aspect, the instructions including program code designed to perform the deaf-mute assist method described in the first aspect.
  • a computer program product, which can be directly loaded into the internal memory of a computer and contains software code; the computer program can be loaded and executed by the computer to implement the deaf-mute assist method according to the first aspect.
  • the deaf-mute assisting method provided by the embodiments of the present invention first receives a sound, then recognizes the received sound and converts it into a display signal according to the recognition result, and finally performs display under the driving of the display signal.
  • in other words, the method converts a received auditory signal into a visual signal, so that a deaf-mute person can visually see the display content corresponding to the sound; the method can therefore assist a deaf-mute person in perceiving sound.
  • in addition, compared with the prior art, the deaf-mute assisting method provided by the embodiments of the present invention requires neither a complicated selection process nor language training, and can thus assist a deaf-mute person in perceiving sound conveniently and quickly.
  • FIG. 1 is a flow chart of steps of a deaf-mute assist method according to an embodiment of the present invention
  • FIG. 2 is a second flowchart of steps of a deaf-mute assist method according to an embodiment of the present invention
  • FIG. 3 is a third flowchart of the steps of the deaf-mute assisting method provided by the embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a correspondence relationship between a sound orientation and a display position according to an embodiment of the present invention
  • FIG. 5 is a fourth flowchart of steps of a deaf-mute assist method according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a deaf-mute auxiliary device according to an embodiment of the present invention.
  • FIG. 7 is a second schematic structural diagram of a deaf-mute auxiliary device according to an embodiment of the present invention.
  • FIG. 8 is a third schematic structural diagram of a deaf-mute auxiliary device according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the basic principle of the technical solution provided by the embodiments of the present invention is: identifying the received sound, converting the received sound into a display signal, and displaying the content corresponding to the sound under the driving of the display signal, thereby enabling a deaf-mute person to perceive the sound by watching the visual information corresponding to it.
  • the execution body of the deaf-mute assisting method provided by the embodiment of the present invention may be a deaf-mute auxiliary device or an electronic device that can be used to execute the deaf-mute assisting method.
  • the deaf-mute auxiliary device may be a combination of a central processing unit (CPU), a CPU and a memory in the electronic device, or may be another control unit or module in the electronic device.
  • the foregoing electronic device may be a mobile phone, augmented reality glasses (AR glasses), a personal computer (PC), a netbook, a personal digital assistant, a server, or the like that can assist a deaf-mute person by using the method provided by the embodiments of the present invention.
  • the above electronic device may also be a PC, a server, or the like that is installed with a software client, software system, or software application capable of assisting the deaf-mute person; the specific hardware implementation environment may take the form of a general-purpose computer, an ASIC, an FPGA, or a programmable extension platform such as Tensilica's Xtensa platform.
  • an embodiment of the present invention provides a deaf-mute assisting method.
  • the deaf-mute assisting method includes the following steps:
  • the sound in the above embodiment may be speech uttered when another person communicates with the user, broadcast speech, or the like; or a sound in the environment, such as a car whistle, a dog barking, or thunder.
  • the sound can be received by a sound sensing device such as a microphone (Mic) or a Mic array.
  • the process of recognizing the sound in the above embodiment and converting it into the display signal according to the recognition result may be completed inside the deaf-mute assist device, or may be completed with the assistance of a remote service device.
  • in the first case, step S12 can be specifically realized by the following steps: a. identifying the sound by an internal sound processing device; b. converting the sound into a corresponding display signal according to the recognition result of the sound processing device.
  • in the second case, step S12 can be specifically implemented by: c. transmitting the sound to a remote server, so that the remote server identifies the sound and converts it into a display signal according to the recognition result; d. receiving the display signal sent by the remote server.
  • the remote service device can be a cloud server or the like.
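The two conversion paths above (local processing, steps a-b, versus cloud offload, steps c-d) can be sketched as interchangeable back ends. The `LocalSoundProcessor` and `RemoteServer` classes are stand-ins for a real DSP pipeline and a network service; no actual protocol is implied by the patent text:

```python
# Hedged sketch of the two conversion paths: inside the assist device
# versus offloaded to a remote/cloud server. Both return the same kind
# of display signal, so the rest of the pipeline is unaffected.

class LocalSoundProcessor:
    def convert(self, sound):
        # Steps a + b: identify locally, map to a display signal.
        return {"kind": "text", "payload": f"recognized locally: {sound}"}

class RemoteServer:
    def convert(self, sound):
        # Steps c + d, collapsed: the server identifies the sound and
        # returns the display signal that the device then receives.
        return {"kind": "text", "payload": f"recognized remotely: {sound}"}

def convert_sound(sound, processor=None, server=None):
    """Prefer the internal processor; otherwise fall back to the server."""
    if processor is not None:
        return processor.convert(sound)
    return server.convert(sound)
```

Keeping both back ends behind one interface reflects the document's point that either path may complete step S12.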
  • the converting of the sound into the display signal in step S12 may be: converting the sound into at least one of a display signal for displaying text, a display signal for displaying an identifier, and a display signal for displaying a dynamic picture.
  • for example, when the received sound is a voice uttered by a person communicating face-to-face with the user, it can be converted into a display signal for displaying text.
  • when the received sound is a barking sound, it can be converted into a display signal for displaying an identifier such as a cartoon drawing of a dog.
  • when the received sound is the sound emitted by a moving car, it can be converted into a display signal for displaying a dynamic picture of the car moving.
  • in addition, the received sound can be expressed more clearly by combining multiple kinds of visual information.
  • for example, when the received sound is the sound emitted by a moving car, it can be converted into a display signal for displaying both a moving picture of the car and a car logo.
  • it is also possible to convert the received sound into other types of display signals on the basis of the above embodiments; such variations are reasonable modifications of the embodiments of the present invention and therefore also fall within their scope of protection.
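The category-to-presentation mapping described above (speech to text, a bark to a dog identifier, a moving car to a dynamic picture plus a logo) can be sketched as a table of allowed forms. The table contents are illustrative assumptions drawn from the examples in the text:

```python
# Sketch of mapping a recognized sound category to one or more display
# signals. Combined forms (e.g. dynamic picture + logo) are expressed as
# a list of (kind, payload) entries per category.

SIGNAL_FORMS = {
    "face_to_face_speech": [("text", None)],  # payload filled from transcript
    "dog_bark":            [("identifier", "dog cartoon")],
    "car_moving":          [("dynamic_picture", "car driving"),
                            ("identifier", "car logo")],  # combined display
}

def to_display_signals(category, transcript=None):
    """Build the list of display signals for one recognized category."""
    forms = SIGNAL_FORMS.get(category, [("identifier", "unknown sound")])
    return [{"kind": kind,
             "payload": transcript if kind == "text" else payload}
            for kind, payload in forms]
```

A category absent from the table falls back to a generic identifier, which keeps the display pipeline total over any recognition result.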
  • the specific display manner may be selected based on the execution body of the deaf-mute assisting method provided by the embodiments of the present invention. For example, when the execution body is a mobile phone, the display under the driving of the display signal may be performed by driving the screen of the mobile phone with the display signal; when the execution body is a pair of AR glasses, the display may be performed by driving a projection display device with the display signal so as to project the display content on the lenses of the AR glasses.
  • the deaf-mute assisting method provided by the embodiments of the present invention first receives a sound, then recognizes the received sound and converts it into a display signal according to the recognition result, and finally performs display under the driving of the display signal. The method thus converts the received auditory signal into a visual signal, enabling the deaf-mute person to see the display content corresponding to the sound and thereby perceive it.
  • in addition, compared with the prior art, the deaf-mute assisting method provided by the embodiments of the present invention requires neither a complicated selection process nor language training, and can therefore assist the deaf-mute person in perceiving sound conveniently and quickly.
  • the sound is recognized in the above step S12, and the sound is converted into a display signal according to the recognition result, which can be specifically implemented by the following steps:
  • in step S121, if the sound is determined to be a speaking sound by recognizing its type, step S122 is performed; and/or, if the sound is determined to be an ambient sound by recognizing its type, step S123 is performed. That is, step S122 and step S123 in the embodiment of the present invention may both be performed, or either one may be performed alone.
  • the speaking voice in the embodiment of the present invention generally refers to the voice emitted by a human when talking, making a speech, broadcasting news, and the like.
  • the voice may also be received after being processed; for example, during a lecture the speaker's voice may be amplified before being output and received.
  • although such sounds are not spoken directly by humans, they also belong to the speaking voices of the embodiments of the present invention.
  • the ambient sound in the embodiment of the present invention is any sound other than a speaking sound; the received sound is thus divided into speaking sound and ambient sound.
  • the ambient sound can be, for example: a car whistling, a dog barking, thunder, noise in the environment, and so on.
  • the recognition of the voice content in the foregoing embodiment may be specifically implemented by: e. determining the language type of the received voice by a language-type recognition technology, for example identifying the received voice as Chinese, English, or French; f. recognizing the spoken content according to the identified language type and the received voice itself. That is, when the received sound is speech, its language type can be recognized first in order to identify the specific content of the speech.
  • since the content of speech is often complicated, it is difficult to display the corresponding content clearly by means of an identifier, a dynamic picture, or the like. Therefore, in the embodiments of the present invention, when the sound is speech, the sound is converted into text according to the content of the speech, so that the content of the received speech is displayed more clearly.
  • the identifier in the above embodiment may specifically be: a cartoon drawing of a dog, a cartoon drawing of a car, a danger sign, a lightning sign, and the like.
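The branch in steps S121-S123 (classify the sound type, then either recognize speech content and emit text, or recognize the ambient category and emit an identifier) can be sketched as follows. The classifier and both recognizers are stubs standing in for real models; the identifier table reuses the examples named above:

```python
# Sketch of steps S121-S123: classify the sound type first, then route
# speech to text conversion and ambient sound to identifier conversion.

def classify_sound_type(sound):
    """Step S121: hypothetical type classifier over a sound descriptor."""
    return "speech" if sound.get("is_human_voice") else "ambient"

def handle_sound(sound):
    if classify_sound_type(sound) == "speech":          # S121 -> S122
        language = sound.get("language", "unknown")     # language-type step e
        return {"kind": "text",
                "payload": sound.get("content", ""),
                "language": language}
    category = sound.get("category", "unknown")         # S121 -> S123
    identifiers = {"dog_bark": "dog cartoon",
                   "thunder": "lightning sign",
                   "car_horn": "car cartoon"}
    return {"kind": "identifier",
            "payload": identifiers.get(category, "danger sign")}
```

Falling back to a generic danger sign for unrecognized ambient categories is one plausible design choice; the patent text does not specify the fallback behavior.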
  • the deaf-mute assisting method provided by the above embodiment can assist the deaf-mute person in perceiving speaking sounds and various sounds in the environment; however, when the user receives a speaking sound in a noisy environment, the received speech may contain noise from the environment, which may make recognition of the speech content inaccurate.
  • to this end, an embodiment of the present invention provides a further deaf-mute assisting method. Specifically, referring to FIG. 3, on the basis of the deaf-mute assisting method described above, it further includes:
  • the opposite person is the person who utters the voice.
  • an image of the opposite person may be acquired by one or more of a monocular camera, a binocular camera, a depth camera, an image sensor, and the like.
  • in practice, any image capturing device may be used to obtain the image of the opposite person; the embodiments of the present invention do not limit the manner of acquiring it, as long as the image can be obtained.
  • exemplarily, the image of the opposite person may be a dynamic picture of the person while speaking.
  • in this case, step S122 (converting the speaking sound into a display signal for driving displayed text according to the content of the speaking sound) can be realized by the implementation provided in step S33.
  • the image of the opposite person is recognized to obtain the lip motion of the opposite person; then, when the sound is speech, the content of the speech is recognized, and the content is converted into a display signal corresponding to text according to both the speech content and the opposite person's lip motion. Since lip-reading recognition technology can interpret the words spoken from the opposite person's lip motion, the accuracy of the conversion can be improved.
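One way the audio and lip-motion channels could be combined is a word-level, confidence-weighted fusion: agreeing hypotheses are accepted directly, and disagreements are resolved toward the more confident source. This fusion rule is an illustrative assumption; the patent only states that lip motion is used to improve conversion accuracy:

```python
# Sketch of fusing an audio transcript with a lip-reading transcript.
# Each input is a list of (word, confidence) hypotheses, aligned word
# by word; the output is the fused text to display.

def fuse_transcripts(audio_words, lip_words):
    """Combine per-word hypotheses from the two recognizers."""
    fused = []
    for (a_word, a_conf), (l_word, l_conf) in zip(audio_words, lip_words):
        if a_word == l_word:
            fused.append(a_word)            # both channels agree
        else:
            # Disagreement: trust the higher-confidence hypothesis.
            fused.append(a_word if a_conf >= l_conf else l_word)
    return " ".join(fused)
```

In a noisy environment the audio confidence on a misheard word tends to drop, which is exactly when the visual channel overrides it.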
  • deaf-mute assisting method provided by the foregoing embodiment further includes:
  • the display under the driving of the display signal in the above step S13 may be specifically implemented by displaying, under the driving of the display signal, at the position of the display interface corresponding to the orientation of the sound.
  • when the sound is located behind the user (F1), the display content 41 corresponding to the sound is displayed below the display interface 40; when the sound is located in front of the user (F2), the display content 42 corresponding to the sound is displayed above the display interface 40; when the sound is located on the left side of the user (F3), the display content 43 corresponding to the sound is displayed on the left side of the display interface 40; when the sound is located on the right side of the user (F4), the display content 44 corresponding to the sound is displayed on the right side of the display interface 40.
  • displaying the content at the corresponding position of the display interface further enables the user to know the orientation of the sound, thereby helping the deaf-mute person to perceive the sound more comprehensively.
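The orientation-to-position correspondence of FIG. 4 can be sketched as a mapping from a sound azimuth to a region of the display interface (front F2 to top, right F4 to right, behind F1 to bottom, left F3 to left). The angle convention, degrees clockwise from straight ahead, is an assumption:

```python
# Sketch of mapping a sound's azimuth relative to the user onto a
# display region, following the FIG. 4 correspondence described above.

def display_position(azimuth_deg):
    """Map a sound azimuth (degrees clockwise from straight ahead)
    to a region of the display interface."""
    a = azimuth_deg % 360
    if a >= 315 or a < 45:
        return "top"      # sound in front of the user (F2)
    if a < 135:
        return "right"    # sound to the user's right (F4)
    if a < 225:
        return "bottom"   # sound behind the user (F1)
    return "left"         # sound to the user's left (F3)
```

The azimuth itself would come from the Mic array mentioned earlier, since an array can estimate direction of arrival while a single microphone cannot.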
  • the deaf-mute assisting method includes:
  • detecting the user's hand motion may specifically include: acquiring a dynamic picture of the user by one or more of a monocular camera, a binocular camera, a depth camera, an image sensor, etc., and then acquiring the user's hand motion according to the dynamic picture.
  • alternatively, the user's hand motion can be detected by a hand-worn device that measures motion parameters such as the acceleration and rotation angle of the user's hand, the hand motion being acquired according to those motion parameters.
  • the hand-worn device can be: a ring, a wristband, a data glove, and the like.
  • the process of recognizing the user's hand motion in the above embodiment and converting it into voice according to the recognition result may be completed inside the deaf-mute assist device, or may be completed with the assistance of a remote service device.
  • in the first case, step S52 can be specifically implemented by the following steps: A. recognizing the user's hand motion by an image processing device inside the deaf-mute assist device; B. converting the user's hand motion into a corresponding voice according to the recognition result of the image processing device.
  • in the second case, step S52 may be specifically implemented by: C. sending the image to a remote server, so that the remote server recognizes the user's hand motion and converts it into voice according to the recognition result; D. receiving the voice sent by the remote server.
  • the remote service device can be a cloud server or the like.
  • the sign language content expressed by the gesture can be converted into voice by a speech synthesis technology, and the voice is broadcast through a speaker.
  • the sign language content can be converted into a voice and broadcasted, so that the person who does not understand the sign language can learn the content expressed by the sign language deaf and dumb person through the broadcasted voice, thereby further assisting the deaf and mute person to communicate.
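The reverse channel described above (detect the user's hand motion, recognize the sign it expresses, broadcast synthesized speech) can be sketched end to end. The gesture table and the `speak` stub are illustrative assumptions; a real system would use a gesture-recognition model and a speech-synthesis engine:

```python
# Sketch of the sign-language-to-speech path: hand motion (from camera
# frames or wearable motion parameters) -> recognized gesture -> phrase
# -> speech broadcast through the speaker.

GESTURE_TO_PHRASE = {        # hypothetical sign vocabulary
    "wave": "hello",
    "flat_hand_down": "please wait",
    "thumbs_up": "thank you",
}

def recognize_gesture(motion):
    """Stand-in for image- or sensor-based hand-motion recognition."""
    return motion.get("gesture", "unknown")

def sign_to_speech(motion, speak=lambda text: f"<speaker> {text}"):
    gesture = recognize_gesture(motion)
    phrase = GESTURE_TO_PHRASE.get(gesture, "unrecognized sign")
    return speak(phrase)     # broadcast via the speaker
```

Passing `speak` as a parameter mirrors the document's split between local synthesis and a remote server: either back end can be injected without changing the recognition logic.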
  • FIG. 6 shows a possible structural diagram of the deaf-mute assisting device involved in the above embodiment.
  • the deaf-mute aid includes:
  • a receiving unit 61 configured to receive a sound
  • the converting unit 62 is configured to identify the sound and convert the sound into a display signal according to the recognition result;
  • the display unit 63 is configured to perform display under the driving of the display signal.
  • the deaf-mute assist device provided by the embodiment of the present invention includes a receiving unit, a converting unit, and a display unit, wherein the receiving unit is configured to receive a sound, the converting unit is configured to recognize the sound and convert it into a display signal according to the recognition result, and the display unit is configured to display under the driving of the display signal. The device can therefore convert a received auditory signal into a visual signal, enabling the deaf-mute person to visually see the display content corresponding to the sound, and can thus assist the deaf-mute person in perceiving sound.
  • in addition, the deaf-mute assist device provided by the embodiment of the present invention requires neither a complicated selection process nor language training, and compared with the prior art can therefore assist the deaf-mute person in perceiving sound conveniently and quickly.
  • the converting unit 62 is specifically configured to identify the type of the sound;
  • the converting unit 62 is specifically configured to: when the sound is a speaking sound, recognize the content of the speaking sound and convert the speaking sound into a display signal for driving displayed text according to that content; and/or, when the sound is an ambient sound, recognize the category of the ambient sound and convert the ambient sound into a display signal for driving a display identifier according to that category.
  • the receiving unit 61 is further configured to acquire an image of the opposite person; wherein the opposite person is the person who utters the voice;
  • the converting unit 62 is further configured to acquire a lip motion of the opposite person according to the image of the opposite person;
  • the converting unit 62 is specifically configured to convert the speaking sound into a display signal for driving the display text according to the content of the speaking sound and the lip motion of the opposite person.
  • the receiving unit 61 is further configured to acquire an orientation of the sound
  • the display unit 63 is further configured to perform display on a corresponding position of the display interface under the driving of the display signal according to the orientation of the sound.
  • the converting unit 62 includes: a sending module 71 and a receiving module 72;
  • the sending module 71 is configured to send the sound to the remote server, so that the remote server identifies the sound and converts it into a display signal according to the recognition result;
  • the receiving module 72 is configured to receive a display signal sent by the remote server.
  • the deaf-mute assist device 600 further includes: a voice broadcast unit 64;
  • the receiving unit 61 is further configured to detect a user's hand motion
  • the converting unit 62 is further configured to identify the user's hand motion and convert the user's hand motion into voice according to the recognition result;
  • the voice broadcast unit 64 is configured to broadcast the voice.
  • the receiving unit 61 is configured to implement the steps of receiving the sound, acquiring the image of the opposite person, and acquiring the orientation of the sound in the above-described deaf-mute assisting method;
  • the converting unit 62 is configured to implement the steps of: recognizing the sound and converting it into a display signal according to the recognition result; recognizing the type of the sound; recognizing the content of the speaking sound and converting the speaking sound into a display signal for driving displayed text according to that content; recognizing the category of the ambient sound and converting the ambient sound into a display signal for driving a display identifier according to that category; acquiring the lip motion of the opposite person according to the image of the opposite person; converting the content of the voice into a display signal corresponding to text according to the content of the voice and the lip motion of the opposite person; and recognizing the user's hand motion and converting it into voice according to the recognition result;
  • the sending module 71 and the receiving module 72 are configured to implement, respectively, the steps of sending the sound to the remote server and receiving the display signal sent by the remote server.
  • the receiving unit 61 may be one or more of a Mic, a Mic array, a camera, an image sensor, an ultrasonic detecting device, an infrared camera, and the like.
  • the identification unit 62 may be a processor or a transceiver; the display unit 63 may be a display screen, a laser projection display device; the voice announcement unit 64 may be a speaker or the like.
  • the programs corresponding to the actions performed by the above-mentioned deaf-mute assist device can be stored, in the form of software, in the memory of the deaf-mute assist device, so that the processor calls and performs the operations corresponding to the above respective units.
  • FIG. 9 shows a possible structural diagram of an electronic device including the deaf-mute aid device involved in the above embodiment.
  • the electronic device 900 includes a processor 91, a memory 92, a system bus 93, a communication interface 94, a sound collection device 95, and a display device 96.
  • the processor 91 may be a processor or a collective name of a plurality of processing elements.
  • the processor 91 can be a central processing unit (CPU).
  • the processor 91 can also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like, and can implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the processor 91 may also be a dedicated processor, which may include at least one of a baseband processing chip, a radio frequency processing chip, and the like.
  • the processor can also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the dedicated processor may also include a chip having other specialized processing functions of the device.
  • the memory 92 is used to store computer execution code
  • the processor 91 is connected to the memory 92 through the system bus 93.
  • the processor 91 is configured to execute the computer-executable code stored in the memory 92 so as to perform the deaf-mute assisting method provided by any of the embodiments of the present invention. For example, the processor 91 is configured to support the electronic device in performing step S12 shown in FIG. 1, steps S121, S122, and S123 shown in FIG. 2, steps S32 and S33 shown in FIG. 3, step S52 shown in FIG. 5, and/or other processes of the techniques described herein; for the specific deaf-mute assisting method, reference may be made to the related descriptions above and in the drawings, and details are not described herein again.
  • System bus 93 can include a data bus, a power bus, a control bus, and a signal status bus. In the present embodiment, for the sake of clarity, the various buses are illustrated in FIG. 9 as the system bus 93.
  • Communication interface 94 may specifically be a transceiver on the device.
  • the transceiver can be a wireless transceiver.
  • the wireless transceiver can be an antenna or the like of the device.
  • the processor 91 communicates with other devices via the communication interface 94; for example, if the deaf-mute assist device is a module or component of the electronic device, the communication interface 94 is used for data interaction with other modules in the electronic device.
  • the steps of the method described in connection with the present disclosure may be implemented in a hardware manner, or may be implemented by a processor executing software instructions.
  • the embodiment of the present invention further provides a storage medium for storing computer software instructions for use in the electronic device shown in FIG. 9, which includes program code designed to execute the deaf-mute assist method provided by any of the above embodiments.
  • the software instructions may be composed of corresponding software modules, and the software modules may be stored in a random access memory (English: random access memory, abbreviation: RAM), flash memory, read only memory (English: read only memory, abbreviation: ROM) , erasable programmable read-only memory (English: erasable programmable ROM, abbreviation: EPROM), electrically erasable programmable read-only memory (English: electrical EPROM, abbreviation: EEPROM), registers, hard disk, mobile hard disk, CD-ROM (CD-ROM) or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the ASIC can be located in a core network interface device.
  • the processor and the storage medium may also exist as discrete components in the core network interface device.
  • the embodiment of the invention further provides a computer program product, which can be directly loaded into the internal memory of a computer and contains software code; the computer program can be loaded and executed by the computer to implement the deaf-mute assisting method provided by any of the above embodiments.
  • the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
  • the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
  • Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
  • a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.
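The deaf-mute assist method that such program code implements follows three steps: receive a sound, recognize it and convert the recognition result into a display signal, and perform display under control of that signal. The sketch below illustrates that pipeline only; every function name, the dict-shaped "display signal", and the doorbell example are illustrative assumptions, not the patent's actual interfaces.

```python
# Minimal sketch of the deaf-mute assist pipeline: receive a sound (S11),
# recognize it and convert the result into a display signal (S12), and
# perform display under control of that signal (S13). All names here are
# hypothetical stand-ins for the components described in the embodiments.

def receive_sound() -> bytes:
    """Stand-in for microphone capture; returns a raw audio frame."""
    return b"\x00\x01"

def recognize_sound(audio: bytes) -> str:
    """Stand-in recognizer: classifies the sound (or transcribes speech)."""
    return "doorbell" if audio else "silence"

def to_display_signal(result: str) -> dict:
    """Convert a recognition result into a display signal (text plus icon)."""
    return {"text": result, "icon": "alert" if result == "doorbell" else "info"}

def display(signal: dict) -> str:
    """Drive the display under control of the display signal."""
    return f"[{signal['icon']}] {signal['text']}"

if __name__ == "__main__":
    audio = receive_sound()             # S11: receive a sound
    result = recognize_sound(audio)     # S12: recognize the sound
    signal = to_display_signal(result)  # S12: convert to a display signal
    print(display(signal))              # S13: display, prints "[alert] doorbell"
```

On real hardware, S11 would read from a microphone and S12 would call a trained classifier or speech recognizer; the display signal could then drive text or an icon on the device's screen as the embodiments describe.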

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Telephone Function (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to a method and apparatus for assisting persons with hearing and speech impairments, and to an electronic device, intended to help such persons detect sounds quickly and easily. The method comprises: receiving a sound (S11); recognizing the sound and converting it into a display signal according to the recognition result (S12); and performing display under the control of the display signal (S13).
PCT/CN2016/110475 2016-12-16 2016-12-16 Method and apparatus for assisting persons with hearing and speech impairments, and electronic device WO2018107489A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680006924.XA CN107223277A (zh) 2016-12-16 2016-12-16 Deaf-mute assist method and apparatus, and electronic device
PCT/CN2016/110475 WO2018107489A1 (fr) 2016-12-16 2016-12-16 Method and apparatus for assisting persons with hearing and speech impairments, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/110475 WO2018107489A1 (fr) 2016-12-16 2016-12-16 Method and apparatus for assisting persons with hearing and speech impairments, and electronic device

Publications (1)

Publication Number Publication Date
WO2018107489A1 true WO2018107489A1 (fr) 2018-06-21

Family

ID=59928232

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/110475 WO2018107489A1 (fr) 2016-12-16 2016-12-16 Method and apparatus for assisting persons with hearing and speech impairments, and electronic device

Country Status (2)

Country Link
CN (1) CN107223277A (fr)
WO (1) WO2018107489A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111128180A (zh) * 2019-11-22 2020-05-08 北京理工大学 Auxiliary dialogue *** for hearing-impaired persons
CN113011245A (zh) * 2021-01-28 2021-06-22 南京大学 Lip-reading recognition *** and method based on ultrasonic sensing and knowledge distillation

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111651A (zh) * 2018-02-01 2019-08-09 周玮 Intelligent language interaction *** based on posture sensing
CN108510988A (zh) * 2018-03-22 2018-09-07 深圳市迪比科电子科技有限公司 Language recognition *** and method for deaf-mute persons
CN108596107A (zh) 2018-04-26 2018-09-28 京东方科技集团股份有限公司 Lip-reading recognition method and apparatus based on an AR device, and AR device
CN108877407A (zh) * 2018-06-11 2018-11-23 北京佳珥医学科技有限公司 Method, apparatus and *** for assisting communication, and augmented reality glasses
CN111679745A (zh) * 2019-03-11 2020-09-18 深圳市冠旭电子股份有限公司 Loudspeaker control method, apparatus and device, wearable device, and readable storage medium
CN110020442A (zh) * 2019-04-12 2019-07-16 上海电机学院 Portable translator
CN110009973A (zh) * 2019-04-15 2019-07-12 武汉灏存科技有限公司 Sign-language-based real-time mutual translation method, apparatus, device and storage medium
CN110351631A (zh) * 2019-07-11 2019-10-18 京东方科技集团股份有限公司 Communication device for deaf-mute persons and method of using same
TWI743624B (zh) * 2019-12-16 2021-10-21 陳筱涵 Attention-concentration assistance system
CN111343554A (zh) * 2020-03-02 2020-06-26 开放智能机器(上海)有限公司 Hearing-aid method and *** combining vision and speech
CN112185415A (zh) * 2020-09-10 2021-01-05 珠海格力电器股份有限公司 Sound visualization method and apparatus, storage medium, and MR mixed-reality device
CN114267323A (zh) * 2021-12-27 2022-04-01 深圳市研强物联技术有限公司 Speech hearing-aid AR glasses for deaf-mute persons and communication method therefor
CN114615609B (zh) * 2022-03-15 2024-01-30 深圳市昂思科技有限公司 Hearing aid control method, hearing aid device, apparatus, equipment and computer medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020103649A1 (en) * 2001-01-31 2002-08-01 International Business Machines Corporation Wearable display system with indicators of speakers
CN101124617A (zh) * 2005-01-21 2008-02-13 L·凯茨 Management and assistance *** for deaf persons
CN103946733A (zh) * 2011-11-14 2014-07-23 谷歌公司 Displaying sound indications on a wearable computing ***
CN104485104A (zh) * 2014-12-16 2015-04-01 芜湖乐锐思信息咨询有限公司 Smart wearable device
CN104966433A (zh) * 2015-07-17 2015-10-07 江西洪都航空工业集团有限责任公司 Smart glasses for assisting deaf-mute persons in conversation
CN105324811A (zh) * 2013-05-10 2016-02-10 微软技术许可有限责任公司 Speech-to-text conversion
CN105529035A (zh) * 2015-12-10 2016-04-27 安徽海聚信息科技有限责任公司 *** for a smart wearable device
CN105765486A (zh) * 2013-09-24 2016-07-13 纽昂斯通讯公司 Wearable communication enhancement apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020103649A1 (en) * 2001-01-31 2002-08-01 International Business Machines Corporation Wearable display system with indicators of speakers
CN101124617A (zh) * 2005-01-21 2008-02-13 L·凯茨 Management and assistance *** for deaf persons
CN103946733A (zh) * 2011-11-14 2014-07-23 谷歌公司 Displaying sound indications on a wearable computing ***
CN105324811A (zh) * 2013-05-10 2016-02-10 微软技术许可有限责任公司 Speech-to-text conversion
CN105765486A (zh) * 2013-09-24 2016-07-13 纽昂斯通讯公司 Wearable communication enhancement apparatus
CN104485104A (zh) * 2014-12-16 2015-04-01 芜湖乐锐思信息咨询有限公司 Smart wearable device
CN104966433A (zh) * 2015-07-17 2015-10-07 江西洪都航空工业集团有限责任公司 Smart glasses for assisting deaf-mute persons in conversation
CN105529035A (zh) * 2015-12-10 2016-04-27 安徽海聚信息科技有限责任公司 *** for a smart wearable device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111128180A (zh) * 2019-11-22 2020-05-08 北京理工大学 Auxiliary dialogue *** for hearing-impaired persons
CN113011245A (zh) * 2021-01-28 2021-06-22 南京大学 Lip-reading recognition *** and method based on ultrasonic sensing and knowledge distillation
CN113011245B (zh) * 2021-01-28 2023-12-12 南京大学 Lip-reading recognition *** and method based on ultrasonic sensing and knowledge distillation

Also Published As

Publication number Publication date
CN107223277A (zh) 2017-09-29

Similar Documents

Publication Publication Date Title
WO2018107489A1 (fr) Method and apparatus for assisting persons with hearing and speech impairments, and electronic device
US9805619B2 (en) Intelligent glasses for the visually impaired
US11043231B2 (en) Speech enhancement method and apparatus for same
EP2842055B1 (fr) Instant translation system
US20150379896A1 (en) Intelligent eyewear and control method thereof
US10304452B2 (en) Voice interactive device and utterance control method
US9307073B2 (en) Visual assistance systems and related methods
WO2017142775A1 (fr) Hearing assistance with automated speech transcription
US20190019512A1 (en) Information processing device, method of information processing, and program
US20190138603A1 (en) Coordinating Translation Request Metadata between Devices
CN114115515A (zh) Method and head-mounted unit for helping a user
WO2015143114A1 (fr) Sign language translation apparatus using smart optical glasses as a screen, equipped with a camera and optionally a microphone
US20170024380A1 (en) System and method for the translation of sign languages into synthetic voices
Salvi et al. Smart glass using IoT and machine learning technologies to aid the blind, dumb and deaf
US20180167745A1 (en) A head mounted audio acquisition module
CN113763940A (zh) Speech information processing method and *** for AR glasses
CN111081120A (zh) Smart wearable device for helping persons with hearing and speech impairments communicate
JP2021117371A (ja) Information processing device, information processing method, and information processing program
CN113220912A (zh) Interaction assistance method and apparatus, and computer-readable storage medium
JP2015011651A (ja) Information processing device, information processing method, and program
CN210072245U (zh) Translation glasses
KR102000282B1 (ko) Conversation support device for assisting hearing function
KR101410321B1 (ko) Silent speech recognition and vocalization apparatus and method
JP7070402B2 (ja) Information processing device
Sneha et al. AI-powered smart glasses for blind, deaf, and dumb

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16923641

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/11/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16923641

Country of ref document: EP

Kind code of ref document: A1