CN111492673A - Method for transmitting processing states in a hearing matching application for a hearing device


Info

Publication number
CN111492673A
Authority
CN
China
Prior art keywords
hearing
processing
information
point
matching application
Prior art date
Legal status
Granted
Application number
CN201880081448.7A
Other languages
Chinese (zh)
Other versions
CN111492673B (en)
Inventor
S.阿肖夫
B.阿思马纳坦
S.K.鲁德拉瓦尔
Current Assignee
Sivantos Pte Ltd
Original Assignee
Sivantos Pte Ltd
Application filed by Sivantos Pte Ltd
Publication of CN111492673A
Application granted
Publication of CN111492673B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/70: Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00: Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/55: Communication between hearing aids and external devices via a network for data exchange

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention proposes a method for transmitting a processing state (1) in a hearing matching application (2) for a hearing device (4), wherein biological and/or hearing measurement data (20) of a user (14) of the hearing device (4) are transmitted as input values for the hearing matching application (2) to a central memory (24) and stored there in a typed manner in a plurality of processing groups, wherein, at a first point in time (t1), in a first user area (6), first information about the identity of the user (14) is encoded together with second information about at least one first processing mode (50), wherein the first processing mode (50) corresponds to the processing of the data of a first processing group by the hearing matching application (2) active at a given point in time, wherein, at a second point in time (t2), the first and second information encoded at the first point in time (t1) are decoded, wherein at least the biological and/or hearing measurement data (20) of the user (14) in the first processing group are loaded from the central memory (24) into the hearing matching application (2), and wherein the processing state (1) of the hearing matching application (2) with respect to the hearing device (4) present at the first point in time (t1) is established by providing the biological and/or hearing measurement data (20) of the user (14) in the first processing group in at least the first processing mode (50) on the basis of the decoded first and second information.

Description

Method for transmitting processing states in a hearing matching application for a hearing device
Technical Field
The invention relates to a method for transmitting a processing state in a hearing matching application for a hearing device, wherein biological and/or hearing measurement data of a user of the hearing device are stored as input values of the hearing matching application in a typed manner in a plurality of processing groups, wherein a first processing mode corresponds to the processing of the data of a first processing group by the hearing matching application active at a given point in time, and wherein the processing state of the hearing matching application with respect to the hearing device present at a first point in time is established by providing the biological and/or hearing measurement data of the user in the first processing group in the first processing mode.
Background
Hearing devices are commonly used to compensate for or correct hearing impairments or hearing loss in general. Although hearing losses of different manifestations can be grouped into individual medical diagnostic categories on the basis of their similarity to one another, hearing loss is an individual phenomenon, and a hearing device for correcting or compensating for it must therefore be matched to the individual manifestation of the hearing loss present in the user of the hearing device. Furthermore, since the user's hearing changes over time, for example when the hearing in a particular frequency band deteriorates further within a particular period, it may become necessary to re-match the hearing device, i.e. in particular to re-match the hearing device's signal processing settings. This matching is usually done by means of a hearing matching application, which is typically provided by or on behalf of the manufacturer of the hearing device and through which various parameters of the hearing device's signal processing can be accessed and, in particular, set. For this purpose, hearing measurement data are provided which give information about the user's specific, individual hearing loss, so that the parameters of the hearing device can be adjusted in the hearing matching application taking these data into account.
Ideally, hearing devices are matched by a correspondingly trained audiologist or hearing device acoustician by means of the hearing matching application. In developing countries in particular, however, the ratio of hearing device users to hearing device acousticians or audiologists is so high that users there often visit medical stations or the like whose personnel, although medically trained, have no specialized audiological training. In most cases, the medical technicians in such medical stations therefore perform the matching to the extent that their own level of knowledge allows. Although this often yields reasonably good results in terms of the basic usability of the hearing device, it frequently happens that the medical technicians must rely on information or assistance from trained audiologists.
At present, however, no such intervention by an audiologist in the matching performed by a medical technician without specialized audiological training is provided for, and such an intervention is therefore also hardly feasible. A simple telephone inquiry by the medical technician to an audiologist fails because, as a rule, no data of the hearing device user are available to the audiologist, who, from a distance, also has no knowledge of the exact processing state of the hearing matching application. If only for reasons of efficiency, interrupting the matching of a specific user's hearing device until an audiologist can visit the corresponding medical station for support is not a feasible solution.
In some regions, the matching is also performed by a trained audiologist who, for example, visits a specially equipped department in a hospital or medical station on predetermined working days. However, it can happen that a specific matching process for a user is not yet fully completed when the audiologist has to leave the department, in particular for the next appointment at another location. Currently, this results in delays and inefficiencies.
Disclosure of Invention
The object of the invention is therefore to specify a method by means of which the processing state in a hearing matching application for a hearing device can be transmitted to another location together with the data of the hearing device user concerned, and/or retained so that the matching can be continued at a later point in time in the same processing state.
According to the invention, the above-mentioned technical problem is solved by a method for transmitting processing states in a hearing matching application for a hearing device, wherein data of a user of the hearing device, in particular biometric and/or hearing measurement data and/or settings data established for the hearing device in the course of the matching, are transmitted as input values for the hearing matching application to a central memory and stored there in a typed manner (typifiziert) in a plurality of processing groups, wherein, at a first point in time, in a first user area, first information about the identity of the user is encoded together with second information about at least one first processing mode, wherein the first processing mode corresponds to the processing of the data of a first processing group by the hearing matching application active at a given point in time, wherein, at a second point in time, the first and second information encoded at the first point in time are decoded, wherein the data of the user in at least the first processing group are loaded from the central memory into the hearing matching application, and wherein the processing state of the hearing matching application with respect to the hearing device present at the first point in time is established by providing the data of the user in the first processing group in the first processing mode on the basis of the decoded first and second information.
A hearing matching application for a hearing device comprises, in particular, a computer-supported application configured to match the hearing device, on the basis of suitable hearing measurement data of the hearing device user, to the user's hearing requirements known in particular from those data, with regard to the signal processing settings of the hearing device and in particular with regard to the frequency-dependent amplification and/or dynamic compression of an input signal of the hearing device.
Here, the matching application processes data which, in addition to the identity of the hearing device user, may also comprise the user's biometric data, hearing measurement data and the current settings of the user's hearing device. In this way, the matching application detects and processes the data that accumulate in the course of a hearing device matching, which may last several months. The process ends with the final settings for the user's hearing device.
The hearing measurement data here include, in particular, the values of a tone audiogram (Tonaudiogramm), i.e. the frequency-band-resolved values of the audibility of respective test signals relative to a given standard value, values of spatial hearing obtained with a binaural hearing test signal, values of a frequency-dependent discomfort threshold, data on speech intelligibility, or data of a "Real-Ear Unaided Gain" measurement, which are transmitted to a central memory and stored there, in particular as input values for a hearing matching application. The biological data include, in particular, the user's age and sex and organic medical diagnoses, such as organic damage to the tympanic membrane, but may also include medical diagnoses not related to hearing.
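As an illustration of how such typed records might look in software, the following Python sketch models the hearing measurement and biological data; it is not part of the patent, and all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HearingMeasurementData:
    """Hypothetical typed record for the hearing measurement data (20)."""
    user_id: str                             # identity of the user (14), the later "first information"
    tone_audiogram: dict[int, float] = field(default_factory=dict)        # frequency (Hz) -> hearing level (dB HL)
    discomfort_thresholds: dict[int, float] = field(default_factory=dict) # frequency-dependent discomfort threshold
    speech_intelligibility: Optional[float] = None                        # e.g. word recognition score in percent
    real_ear_unaided_gain: dict[int, float] = field(default_factory=dict) # "Real-Ear Unaided Gain" per frequency

@dataclass
class BiologicalData:
    """Hypothetical typed record for the biological data of the user (14)."""
    age: int
    sex: str
    diagnoses: list[str] = field(default_factory=list)  # e.g. organic damage to the tympanic membrane
```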
In this case, the hearing matching application can, on the one hand, run on a separate computer unit tuned to the hearing matching application, in particular one developed specifically for it, so that the computer unit is not provided for other uses; on the other hand, the hearing matching application can also be used on a general-purpose computer unit on which other applications, in principle independent of the hearing matching application, can also run.
The central memory comprises, in particular, a memory which stores the biological and/or hearing measurement data of a plurality of hearing device users and which, for this purpose, at least for the respective transmission and storage processes, can be connected to a plurality of computer units on each of which the hearing matching application is implemented. In particular, the central memory is spatially separated from the computer unit executing the hearing matching application. In particular, the central memory is a cloud-based memory, i.e. a memory in a computing center that is accessible to distributed users, preferably with corresponding privileges, via an internet connection. The biological and/or hearing measurement data are typed (Typifizierung) into a plurality of processing groups, in particular according to the point in the overall process of providing the hearing device at which the respective data are recorded for the first time.
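A minimal in-memory stand-in for such a central memory, keyed by user identity and processing group, could look as follows; the store/fetch interface is an assumption made for illustration, not an API defined by the patent.

```python
from collections import defaultdict

class CentralMemory:
    """Hypothetical stand-in for the cloud-based central memory (24)."""

    def __init__(self) -> None:
        # user identity -> processing group name -> typed data
        self._store: dict[str, dict[str, object]] = defaultdict(dict)

    def store(self, user_id: str, group: str, data: object) -> None:
        """Store typed data of one processing group for one user."""
        self._store[user_id][group] = data

    def fetch(self, user_id: str, group: str) -> object:
        """Targeted retrieval of exactly one processing group of one user."""
        return self._store[user_id][group]
```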
Thus, for providing a hearing device to a future user, the identity of the future user can be detected first, with biometric data such as date of birth or sex also being recorded, so that identity and biometric data are obtained in a data structure adapted to their type. After the user has been registered, the hearing loss can be measured, producing a tone audiogram whose data can likewise be obtained and, in particular, stored in a correspondingly typed format and transmitted to the central memory at a given point in time.
As an alternative to recording the tone audiogram with a measuring device specifically built and calibrated for this purpose, i.e. an audiometer, the hearing loss can also be measured with a hearing device, which, however, usually differs from the user's final hearing device. The result is referred to as an "in-situ audiogram (Insitu-Audiogramm)", i.e. an audiogram performed "in place". After recording the future user's hearing measurement data, the hearing device acoustician can select a suitable hearing device and attach to it a sound coupling that ensures that the sound outlet is held in the user's ear canal. The type designations of the selected hearing device and of the attached coupling can likewise be recorded in a typed data structure.
Once the process of matching the hearing device to the individual hearing loss has started, there are a number of parameter values set in the hearing device, which can likewise be divided into processing groups. For this purpose, a division into processing groups corresponding to terms widely used and familiar to audiologists is preferably applied, for example into the following groups: gain and compression settings, frequency compression settings, microphone directionality settings, noise suppression settings, classification-based tone optimization settings, and settings of a sound therapy used, for example, in the case of tinnitus.
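The division into processing groups listed above could be modeled as a simple enumeration, with a mapping function standing in for the typing step performed before transmission to the central memory; the group names and rules below are illustrative assumptions only.

```python
from enum import Enum, auto

class ProcessingGroup(Enum):
    """Hypothetical processing groups mirroring the division listed above."""
    GAIN_AND_COMPRESSION = auto()
    FREQUENCY_COMPRESSION = auto()
    MICROPHONE_DIRECTIONALITY = auto()
    NOISE_SUPPRESSION = auto()
    CLASSIFICATION_BASED_TONE_OPTIMIZATION = auto()
    SOUND_THERAPY = auto()  # used, for example, in the case of tinnitus

def type_parameter(name: str) -> ProcessingGroup:
    """Assign a hearing device parameter to its processing group (made-up rules)."""
    rules = {
        "gain": ProcessingGroup.GAIN_AND_COMPRESSION,
        "compression_ratio": ProcessingGroup.GAIN_AND_COMPRESSION,
        "beamformer_mode": ProcessingGroup.MICROPHONE_DIRECTIONALITY,
        "noise_reduction_db": ProcessingGroup.NOISE_SUPPRESSION,
        "tinnitus_noiser_level": ProcessingGroup.SOUND_THERAPY,
    }
    return rules.get(name, ProcessingGroup.GAIN_AND_COMPRESSION)
```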
In this case, the typing is preferably carried out in the hearing matching application before transmission to the central memory.
The first user area comprises, in particular, the computer unit executing the hearing matching application and its immediate spatial environment; in the latter case, the first and second information can also be encoded, in particular, by taking a photograph of the computer unit's screen.
A correspondence between the first processing group and the first processing mode exists, in particular, in that, at the given point in time, the hearing matching application is running such that the biological and/or hearing measurement data of the first processing group can be entered, changed or, if necessary, used directly for further processing, in particular for corresponding settings of the hearing device, and preferably not merely in a background process. In particular, the individual processing groups of the user's biological and/or hearing measurement data can here be associated precisely with different processing modes of the hearing matching application, in each of which a specific matching of one or more functions and/or signal processing settings of the hearing device takes place on the basis of the corresponding biological and/or hearing measurement data of the relevant processing group.
Encoding the first and second information comprises, in particular, converting the corresponding information into a correspondingly standardized format which, when decoded, makes the first and second information fully accessible again. In particular, compression and/or encryption may be applied during encoding. The loading of the biological and/or hearing measurement data of the user in at least the first processing group from the central memory into the hearing matching application can, on the one hand, take place after the decoding of the first and second information, following the association of the first processing mode with the first processing group, in which case the computer unit implementing the hearing matching application at or after decoding can request the data of the first processing group from the central memory in a targeted manner, in particular owing to its knowledge of the first processing mode. On the other hand, the data of several processing groups can be downloaded wholesale in advance, in particular without knowledge of the processing mode active at the first point in time, and the first processing mode can be associated accordingly once the first and second information have been decoded.
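As one hedged reading of this encoding step, the first information (user identity) and the second information (active processing mode) could be serialized into a single compressed, transport-safe packet, as sketched below; the payload format is an assumption, not prescribed by the patent.

```python
import base64
import json
import zlib

def encode_state(user_id: str, processing_mode: str) -> str:
    """Encode first information (identity) and second information (processing mode)
    into one standardized, compressed, transport-safe packet."""
    packet = json.dumps({"user": user_id, "mode": processing_mode}).encode("utf-8")
    return base64.urlsafe_b64encode(zlib.compress(packet)).decode("ascii")

def decode_state(token: str) -> tuple[str, str]:
    """Decode the packet at the second point in time, recovering both pieces of information."""
    packet = json.loads(zlib.decompress(base64.urlsafe_b64decode(token)))
    return packet["user"], packet["mode"]

token = encode_state("patient-0042", "GAIN_AND_COMPRESSION")
assert decode_state(token) == ("patient-0042", "GAIN_AND_COMPRESSION")
```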
Storing the user's biological and/or hearing measurement data in the central memory makes it possible to use these data at a later point in time, in particular also at a location other than the one where the storing took place. In particular, the corresponding storage in the central memory can also take place during the encoding of the first and second information, for example by combining the commands for encoding the information with corresponding storage commands.
Typing the stored data into the respective processing groups enables the hearing matching application to process these data in the corresponding processing mode, so that knowledge of the processing mode active at the first point in time is sufficient to provide the data of the corresponding processing group for further processing at another location and/or at another point in time. This is achieved by associating the information about the identity of the user whose hearing device is being matched in a specific matching process (i.e. the first information) with the information about the processing mode active at the given point in time, i.e. the first point in time (i.e. the second information), wherein encoding both into a common information packet additionally prevents the possible loss of either of the two pieces of information.
After decoding, the correct data of the relevant hearing device user can thus be accessed on the basis of the first information; within these data, on the one hand, the correct processing group can be selected on the basis of the second information, and, on the other hand, the first processing mode of the hearing matching application present when the first and second information were encoded at the first point in time can be restored by providing the correctly selected biometric and/or hearing measurement data of the relevant first processing group.
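Putting decoding, targeted loading and state restoration together, one possible orchestration is sketched below, reusing decode_state and ProcessingGroup from the earlier sketches; the central_memory and app interfaces are hypothetical assumptions.

```python
def restore_processing_state(token: str, central_memory, app) -> None:
    """Re-establish the processing state (1) present at the first point in time (t1).

    `central_memory` is assumed to expose fetch(user_id, group_name); `app` is
    assumed to expose activate_mode(mode, data). Neither interface is defined
    by the patent."""
    user_id, mode = decode_state(token)               # decode first and second information
    group = ProcessingGroup[mode]                     # map first processing mode to first processing group
    data = central_memory.fetch(user_id, group.name)  # targeted load of only the first processing group
    app.activate_mode(mode, data)                     # provide the data in the first processing mode
```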
Preferably, at or after the second point in time, the processing state of the hearing matching application with respect to the hearing device present at the first point in time is established in the first user area. This includes, in particular, executing the hearing matching application on a first computer unit at the first point in time, detecting the corresponding first processing state by or by means of the computer unit, for example by taking a photograph of the computer unit's screen, and restoring the first processing state on the same computer unit at or after the second point in time. This makes it possible to match the hearing devices of other users with the hearing matching application on this computer unit between the first and second points in time.
Advantageously, at or after the second point in time, the processing state of the hearing matching application with respect to the hearing device present at the first point in time is established in a second user area. This includes, in particular, that the computer unit executing the hearing matching application at the first point in time differs from the computer unit on which the processing state present at the first point in time is restored at or after the second point in time, the two being, in particular, spatially separated from one another. This enables a corresponding expert, for example an audiologist or a hearing device acoustician, to access the processing state of the hearing matching application at another location during the matching process for a specific user, so that the expert can assist the matching process without long waiting times.
In this case, the first and second information are preferably transmitted from the first user area to the second user area. In particular, the first and second information are encoded in such a way that only a single data packet, which can be regarded as contiguous within the scope of the transmission protocol, has to be transmitted.
Advantageously, for encoding the second information, a display of a graphical user interface of the hearing matching application is optically detected on the computer unit executing the hearing matching application, and at least the first processing mode of the hearing matching application is identified from the optically detected display of the graphical user interface. By performing the optical detection and correspondingly identifying at least the first processing mode, the transmission of the processing state present at the first point in time can be decoupled from the computer unit executing the hearing matching application, and in particular from the connection between that computer unit and the central memory. In this way, the processing state can be transmitted even when the connection between the computer unit executing the hearing matching application and the central memory lacks the required stability at the desired time of transmission. Furthermore, identifying at least the first processing mode by optical detection, with the accompanying decoupling of the transmission of the processing state from the computer unit, can reduce the complexity of the hearing matching application.
Preferably, the positions of a plurality of elements of an application window of the graphical user interface are detected in order to identify at least the first processing mode of the hearing matching application. The elements whose positions are detected include, in particular, the corresponding boundary edge and/or the identifier of the relevant application window, as well as a tab or label protruding beyond the boundary edge and/or a caption or name in the frame or tab of the application window. Owing to the common information structure of hearing matching applications, these elements often have a standardized layout, so that distinguishing features between application windows associated with two different processing modes appear at predictable locations, and the optical detection can be focused on these locations.
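Because the tabs sit at predictable positions, the mode recognition can be reduced to inspecting a few pixel regions of the photograph. The sketch below, using the Pillow library, shows one conceivable approach; the coordinates, the brightness heuristic and the mode names are invented for illustration.

```python
from PIL import Image

# Hypothetical screen regions of the tabs (48), one per processing mode (50).
TAB_REGIONS = {
    "GAIN_AND_COMPRESSION": (40, 10, 160, 40),    # (left, top, right, bottom) in pixels
    "FREQUENCY_COMPRESSION": (170, 10, 290, 40),
    "NOISE_SUPPRESSION": (300, 10, 420, 40),
}
ACTIVE_TAB_BRIGHTNESS = 200  # assumption: the selected tab is rendered noticeably lighter

def identify_processing_mode(photo_path: str) -> str:
    """Guess the active processing mode from a photograph of the GUI (40)."""
    image = Image.open(photo_path).convert("L")  # grayscale simplifies the comparison
    best_mode, best_brightness = "", 0.0
    for mode, box in TAB_REGIONS.items():
        pixels = list(image.crop(box).getdata())
        brightness = sum(pixels) / len(pixels)   # mean brightness of the tab region
        if brightness > best_brightness:
            best_mode, best_brightness = mode, brightness
    # Only accept the guess if the brightest region really looks like an active tab.
    return best_mode if best_brightness >= ACTIVE_TAB_BRIGHTNESS else ""
```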
It has proven further advantageous to perform the optical detection by means of a mobile phone or tablet PC. In particular, a camera built into the mobile phone or tablet PC is used, wherein the optical detection can be carried out by first taking a photograph of the graphical user interface with the camera and then encoding the second information, and preferably also the first information, from this photograph via corresponding pattern recognition. Alternatively, a separate mobile application can be installed on the mobile phone or tablet PC that has direct access to the camera and is configured specifically for recognizing the second information, and in particular also the first information, and for carrying out the corresponding encoding.
Preferably, the first and second information are encrypted when they are encoded. In particular, the first and/or second information can be encrypted in such a way that, although decryption is possible without great overhead in a cryptographic sense, the encoded first and second information are, as a basic security level, not transmitted as plain text. This can be the case, for example, when encoding with XMPP messages and/or with QR codes.
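One way to realize such lightweight "encrypted, not plain text" encoding as a QR code, assuming the third-party qrcode and cryptography packages and reusing encode_state/decode_state from the earlier sketch, is shown below; the patent does not prescribe these libraries.

```python
import qrcode
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # assumed to be shared between the two user areas (distribution not shown)
cipher = Fernet(key)

def encode_to_qr(user_id: str, processing_mode: str, png_path: str) -> None:
    """Encrypt the first and second information and render them as a QR code (32)."""
    token = encode_state(user_id, processing_mode)      # single packet from the earlier sketch
    ciphertext = cipher.encrypt(token.encode("ascii"))  # basic security level: no plain text on the wire
    qrcode.make(ciphertext.decode("ascii")).save(png_path)

def decode_from_qr_payload(payload: str) -> tuple[str, str]:
    """Decrypt a scanned QR payload back into the first and second information."""
    return decode_state(cipher.decrypt(payload.encode("ascii")).decode("ascii"))
```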
Furthermore, the invention proposes a hearing matching application configured to transmit its processing state by the method described above. In particular, the hearing matching application has a corresponding program interface for transmitting the biological and/or hearing measurement data of the hearing device user being matched to the central memory, and is in particular configured to restore a specific processing state, in particular for the relevant processing mode, on the basis of the first and second information. Preferably, the hearing matching application is further configured to decode the encoded first and second information. The advantages stated for the method and its developments can be transferred analogously to the hearing matching application.
Drawings
In the following, embodiments of the invention are described in more detail with reference to the drawings, in which:
fig. 1 schematically shows, in a block diagram, a method of transferring a processing state of a hearing matching application from a first user area to a second user area,
fig. 2 schematically shows, in a block diagram, an alternative embodiment of the method according to fig. 1,
fig. 3 schematically shows a graphical user interface of a hearing matching application, and
fig. 4 schematically shows, in a block diagram, the storage of hearing measurement data in a central memory and the re-provision of these data by means of a hearing matching application.
Parts and parameters corresponding to one another are given the same reference numerals in all figures.
Detailed Description
Fig. 1 schematically shows, in a block diagram, a method of transferring a processing state 1 of a hearing matching application 2 for a hearing device 4 from a first user area 6 to a second user area 8. In the first user area 6, the hearing matching application 2 is executed on a first computer unit 10 with a first screen 12 in order to match the hearing device 4 of a specific user 14. The person 16 matching the hearing device 4 in the first user area is here a medical technician without audiological training. When fitting the hearing device 4, the hearing matching application 2 can cause the hearing device 4 to emit a series of test tones via a wireless or wired signal connection 18 between the first computer unit 10 and the hearing device 4, the user's 14 perception of these test tones being registered in the hearing matching application 2. Upon a corresponding storage command, the person 16 at the first computer unit 10 transmits the hearing measurement values 20 thus determined from the first computer unit 10 via a signal connection 22 to the central memory 24, where they are stored. The central memory 24 is here represented by a cloud. On the basis of the determined hearing measurement values 20, the person 16 then matches the settings for the signal processing in the hearing device 4 to the hearing requirements of the user 14 by means of corresponding inputs on the first computer unit 10 and their conversion by the hearing matching application 2 via the signal connection 18.
If the person 16 matching the hearing device 4 now needs support from a trained audiologist, for example when matching specific parameters of the signal processing settings, the person 16 can take a photograph of the first screen 12 of the first computer unit 10 with a mobile phone 26. In the mobile phone 26, a corresponding additional application, or an extension of the hearing matching application 2, then recognizes, by optical detection of the graphical user interface displayed on the first screen 12, the processing state 1 present on the first computer unit at the first point in time t1 corresponding to the time of the photograph.
Furthermore, information about the identity of the user 14 is also transmitted to the mobile phone 26 during the optical detection of the processing state 1 of the hearing matching application 2. On the one hand, this can be done by displaying corresponding information about the identity of the user 14, if necessary in the form of a patient code, on the first screen 12, so that it is captured in the mobile phone 26 directly with the photograph. On the other hand, an additional application on the mobile phone 26 configured to recognize the processing state of the hearing matching application 2 from the photograph can also send a corresponding request to the first computer unit 10, so that the first computer unit 10 returns the corresponding information about the user 14, e.g. via Bluetooth.
The mobile phone 26 of the person 16 matching the hearing device 4 encodes the determined information about the processing state 1 of the hearing matching application 2 and about the identity of the user 14 and transmits it to the mobile phone 28 of an audiologist 30. The encoding can take place, for example, in the form of a QR code 32. The audiologist 30 is located in the second user area 8, which is spatially separated from the first user area 6 and has a second computer unit 34. The hearing matching application 2 is also executed on the second computer unit 34. The second computer unit 34 is configured to read the QR code 32 on the mobile phone 28 of the audiologist 30, for example via a corresponding reading device connected to the second computer unit 34, or via a corresponding wireless signal transmission between the mobile phone 28 and the second computer unit 34, e.g. via Bluetooth. The second point in time t2 is defined by the decoding of the information contained in the QR code 32.
On the basis of the information contained in the QR code 32 about the processing state 1 of the hearing matching application in the first user area 6 and about the identity of the user 14, exactly the hearing measurement values 20 being processed at the first point in time t1 can now be downloaded in a targeted manner from the central memory 24 via a corresponding signal connection 36 between the central memory 24 and the second computer unit 34. Furthermore, the processing mode of the hearing matching application 2 required for the specific processing of the hearing measurement values 20 can be restored on the second computer unit 34, so that, in particular, the graphical user interface of the hearing matching application 2 on the second screen 38 of the second computer unit 34 reproduces what was shown on the first screen 12 of the first computer unit 10 at the time the photograph was taken with the mobile phone.
Fig. 2 schematically shows, in a block diagram, an alternative embodiment of the method according to fig. 1. Here, in the first user area 6, the audiologist 30 performs processing with the hearing matching application 2 on the first computer unit 10. Such processing may consist, for example, in detecting the hearing measurement values 20 by means of the hearing device 4. If the audiologist 30 has not yet fully completed the processing or the matching of the hearing device 4 but has to leave the first user area 6, for example to match hearing devices elsewhere, the audiologist 30 can take a photograph, with his mobile phone 28, of the processing state of the hearing matching application 2 shown in the graphical user interface on the first screen 12, whereby information about the processing state 1 and about the identity of the user 14 is encoded into the QR code 32, as in the case shown in fig. 1. This time, the QR code is not transmitted to another mobile phone. Instead, the second user area 8 arises from the audiologist 30 continuing the processing himself while traveling, the second computer unit 34 being provided by a notebook computer. The audiologist 30 can thus, for example, match the respective settings of the hearing device 4 on the basis of the hearing measurement values 20 downloaded accordingly from the central memory 24 to the second computer unit 34. These matched settings can then be stored in the central memory 24, so that the person 16 who, if necessary, continues matching the hearing device 4 in the absence of the audiologist 30 only needs to transfer the settings processed in this way onward to the hearing device 4.
Fig. 3 schematically shows a graphical user interface 40 of the hearing matching application 2 according to figs. 1 and 2, the graphical user interface 40 having a plurality of application windows 42. The application windows 42 are distributed over the entire image region 44 in a fixedly predefined geometric pattern, wherein in each subregion 46 several application windows 42 are arranged "one behind the other", so that, for selecting among them, a tab (Reiter) 48 is assigned to each of these application windows 42. With the aid of these tabs 48, it can be recognized, upon optical detection of the graphical user interface 40, which application window 42 is currently in the foreground of the graphical user interface 40, i.e. in which application window 42 the relevant data and/or values are being processed. The first processing mode 50 is thus given by the correspondence of the tab 48 to the active processing of the values in the relevant application window 42, and it is this first processing mode 50 that is restored on the second computer unit when the processing state is transmitted to the second user area as shown in figs. 1 and 2.
Fig. 4 schematically shows, in a block diagram, the central memory 24, to which the first computer unit 10 is connected via the signal connection 22 and a first interface 52, and the second computer unit 34 via a second interface 54 and the signal connection 36. The audiologist 30 executes a hearing matching application 2 on the first computer unit 10, which is configured, in particular, for detecting the hearing measurement data 20 of a hearing device user (not shown in detail) and for matching the hearing device 4. The person 16 executes a further hearing matching application 2' on the second computer unit 34 which, although likewise configured as a hearing matching application like the application 2 described above, differs from it in its information structure and may be incompatible with it in several individual functions. The person 16 may likewise be an audiologist, or may be a medical technician without specialized audiological training.
On the one hand, the hearing measurement data 20 of the user of the hearing device 4 detected by means of the hearing matching application 2 can now be transmitted to the central memory 24 via the first interface 52; on the other hand, the signal processing settings resulting from the matching of the hearing device 4, such as amplification and compression settings, frequency compression settings, microphone directionality settings, noise suppression settings, settings for classification-based timbre optimization, settings for feedback suppression 55, settings for a sound therapy 56 used, for example, in the case of tinnitus, settings of a smoothing function 58, details of the measurement conditions 59 of the hearing measurement or details of the timbre envelope 60 can also be transmitted to the central memory 24 via the first interface 52. There, the data mentioned are collated in standardized form into necessary data 62 and optional data 64 and preferably stored according to processing groups corresponding to the processing modes 50 of fig. 3; to these are added patient information 65 about the identity of the user of the hearing device 4 and about the type and model of the hearing device, as well as identification information 66 (for example name, address and, where applicable, organization) about the processor of the data, i.e. here the audiologist 30.
Thus, the necessary data 62 detected for the user, the optional data 64, which may relate in particular to special functions of the signal processing settings of the hearing device 4, and the identification information 66 of the audiologist 30 are present in standardized form in the central memory 24. A report 72 on the matching of the hearing device 4 and on the data of the user of the hearing device 4 is now created by a report generator 70 in the central memory 24 by means of a format template 68 stored in the central memory 24 and output via the first interface 52 to the first computer unit 10 in a format readable and processable by the hearing matching application 2, where the audiologist 30 receives the report 72 and can check it.
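The interplay of the format template 68 and the report generator 70 can be pictured as simple template filling; the following sketch with Python's string.Template is a loose illustration, not the patent's implementation.

```python
from string import Template

# Stand-in for the format template (68) stored in the central memory (24).
FORMAT_TEMPLATE = Template(
    "Report for $patient (hearing device: $device)\n"
    "Processed by: $clinician\n"
    "Necessary data: $necessary\n"
    "Optional data: $optional\n"
)

def generate_report(patient: str, device: str, clinician: str,
                    necessary: str, optional: str) -> str:
    """Collate standardized data into a report (72), as the report generator (70) might."""
    return FORMAT_TEMPLATE.substitute(
        patient=patient, device=device, clinician=clinician,
        necessary=necessary, optional=optional,
    )

print(generate_report("patient-0042", "HD-Model-X (hypothetical)", "Audiologist A",
                      "tone audiogram, gain settings", "sound therapy settings"))
```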
If, for example, the user of the hearing device 4 is uncomfortable at a later point in time with settings made for the hearing device by the audiologist 30, or these settings have to be re-matched because the user's hearing has changed, but the user is spatially separated from "his" audiologist 30, the user can visit the person 16 for re-matching. The person 16 can first download the report 72 via the second interface 54 onto the second computer unit 34, the second interface 54 ensuring the compatibility of the data in the report 72 with the hearing matching application 2' used on the second computer unit, for example by matching the display of the standardized format components of the report 72 generated in the report generator 70 to the requirements of the hearing matching application 2'. For this purpose, the first interface 52 and the second interface 54 are preferably each coordinated with the associated hearing matching application 2 or 2'.
The person 16 may, however, also detect further data, for example data about the program control 74 in the hearing device 4, data about the frequency shaping 76, or possibly other data involving a re-evaluation of data already stored by the audiologist 30 in the central memory 24.
These data are now transmitted via the signal connection 36 to the second interface 54 and via the second interface 54 to the central memory 24, where they are stored according to the categories "necessary"/"optional" and preferably according to the processing groups corresponding to the processing modes 50 of fig. 3. Finally, the report generator 70 can create an updated report 78 from the new and, where applicable, already existing data or a part thereof, together with the new identification information 66', according to the format template 68, and output the updated report 78 to the second computer unit 34 via the second interface 54 in a format readable and processable by the hearing matching application 2'. If desired, the audiologist 30 can also download the updated report 78 to the first computer unit 10 via the first interface 52 in a format readable and processable by the hearing matching application 2.
While the invention has been illustrated and described in detail by means of the preferred embodiments, it is not limited by these embodiments. A person skilled in the art can derive other variants from them without departing from the scope of protection of the invention.
List of reference numerals
1 processing state
2 Hearing matching application
4 hearing device
6 first user area
8 second user area
10 first computer unit
12 first screen
14 users
16 person
18 signal connection
20 hearing measurement data
22 signal connection
24 central memory
26 mobile telephone
28 Mobile telephone
30 audiologist
32 QR code
34 second computer unit
36 signal connection
38 second screen
40 graphic user interface
42 application windows
44 image area
46 sub-region
48 tab
50 processing mode
52 first interface
54 second interface
55 feedback suppression
56 Sound treatment
58 smoothing function
59 measurement conditions
60 timbre envelope
62 necessary data
64 optional data
65 patient information
66 identification information
68 form template
70 report generator
72 report
74 program control
76 frequency shaping
78 updated reports
t1 first time point
t2 second time point

Claims (11)

1. A method for transmitting a processing state (1) in a hearing matching application (2, 2') for a hearing device (4),
wherein data (20) of a user (14) of a hearing device (4) are transmitted as input values for the hearing matching application (2) to a central memory (24) and stored there in a typed manner in a plurality of processing groups,
wherein, at a first point in time (t1), in a first user area (6), first information about the identity of the user (14) is encoded together with second information about at least one first processing mode (50), wherein the first processing mode (50) corresponds to the processing of the data of a first processing group by the hearing matching application (2, 2') active at a given point in time,
wherein, at a second point in time (t2), the first and second information encoded at the first point in time (t1) are decoded,
wherein the data (20) of the user (14) in at least the first processing group are loaded from the central memory (24) into the hearing matching application (2, 2'), and
wherein the processing state (1) of the hearing matching application (2, 2') with respect to the hearing device (4) present at the first point in time (t1) is established by providing the data (20) of the user (14) in the first processing group in at least the first processing mode (50) on the basis of the decoded first and second information.
2. The method according to claim 1,
wherein, in the first user area (6), at the second point in time (t2) or after the second point in time (t2), the processing state (1) of the hearing matching application (2, 2') with respect to the hearing device (4) present at the first point in time (t1) is established.
3. The method according to claim 1,
wherein, in the second user area (8), at the second point in time (t2) or after the second point in time (t2), the processing state (1) of the hearing matching application (2, 2') with respect to the hearing device (4) present at the first point in time (t1) is established.
4. The method according to claim 3,
wherein the encoded first information and the encoded second information are transmitted from the first user area (6) to the second user area (8).
5. The method according to any one of the preceding claims,
wherein, for encoding the second information, a display of a graphical user interface (40) of the hearing matching application (2) is optically detected on a computer unit (10) executing the hearing matching application (2, 2'), and
wherein at least a first processing mode (50) of the hearing matching application (2, 2') is identified from the optically detected display of the graphical user interface (40).
6. The method according to claim 5,
wherein, in order to identify at least a first processing mode (50) of the hearing matching application (2, 2'), positions of a plurality of elements of an application window (42) of the graphical user interface (40) are detected.
7. The method according to claim 5 or 6,
wherein the optical detection is performed by means of a mobile phone (26) or a tablet PC.
8. The method according to any one of the preceding claims,
wherein encryption is performed when the first information and the second information are encoded.
9. The method according to claim 8,
wherein the first information and the second information are encoded by an XMPP message and/or a QR code (32).
10. The method according to any one of the preceding claims,
wherein, as the data (20) of the user (14), biometric and/or hearing measurement data and/or settings data of the hearing device (4) are transmitted as input values for the hearing matching application (2) to the central memory (24) and stored there in a typed manner.
11. A hearing matching application (2, 2') configured for transmitting a processing state (1) of the hearing matching application by the method according to any of the preceding claims.
CN201880081448.7A 2018-06-08 2018-06-08 Method for transmitting processing states and hearing matching application system Active CN111492673B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2018/065190 WO2019233602A1 (en) 2018-06-08 2018-06-08 Method for transmitting a processing state in an audiological adaptation application for a hearing device

Publications (2)

Publication Number Publication Date
CN111492673A (en) 2020-08-04
CN111492673B (en) 2022-02-11

Family

Family ID=62597498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880081448.7A Active CN111492673B (en) 2018-06-08 2018-06-08 Method for transmitting processing states and hearing matching application system

Country Status (4)

Country Link
EP (1) EP3649792B1 (en)
CN (1) CN111492673B (en)
DK (1) DK3649792T3 (en)
WO (1) WO2019233602A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4017029A1 (en) * 2020-12-16 2022-06-22 Sivantos Pte. Ltd. System, method and computer program for interactively assisting a user in evaluating a hearing loss


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002288125A (en) * 2001-03-27 2002-10-04 Just Syst Corp System and method for reproducing working state
JP4817814B2 (en) * 2004-11-19 2011-11-16 富士通株式会社 Application state information transfer system
US9223611B2 (en) * 2010-12-28 2015-12-29 Microsoft Technology Licensing, Llc Storing and resuming application runtime state
US20130041790A1 (en) * 2011-08-12 2013-02-14 Sivakumar Murugesan Method and system for transferring an application state
US8990343B2 (en) * 2012-07-30 2015-03-24 Google Inc. Transferring a state of an application from a first computing device to a second computing device

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878900A (en) * 2008-12-22 2017-06-20 奥迪康有限公司 The method and hearing aid device system of the estimation operation hearing instrument based on user's present cognitive load
US20180063656A1 (en) * 2010-05-17 2018-03-01 Iii Holdings 4, Llc Devices and methods for collecting acoustic data
US20170223471A1 (en) * 2011-01-19 2017-08-03 Apple Inc. Remotely updating a hearing aid profile
CN103503364A (en) * 2011-04-01 2014-01-08 英特尔公司 Application usage continuum across platforms
US20140095625A1 (en) * 2012-10-02 2014-04-03 Nextbit Systems Inc. Application state backup and restoration across multiple devices
CN103945315A (en) * 2012-11-23 2014-07-23 奥迪康有限公司 Listening device comprising an interface to signal communication quality and/or wearer load to surroundings
WO2014094858A1 (en) * 2012-12-20 2014-06-26 Widex A/S Hearing aid and a method for improving speech intelligibility of an audio signal
CN108028055A (en) * 2015-10-19 2018-05-11 索尼公司 Information processor, information processing system and program
CN107454536A (en) * 2016-05-30 2017-12-08 西万拓私人有限公司 For the method for the parameter value for automatically determining hearing-aid device
CN107786930A (en) * 2016-08-25 2018-03-09 西万拓私人有限公司 Method and apparatus for setting hearing-aid device
CN107911528A (en) * 2017-12-15 2018-04-13 刘方辉 A kind of hearing compensation system based on smart mobile phone and its self-service test method of completing the square

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘春丽: "《移动听力APP技术特点与应用》", 《中国听力语言康复科学杂志》 *

Also Published As

Publication number Publication date
WO2019233602A1 (en) 2019-12-12
CN111492673B (en) 2022-02-11
EP3649792A1 (en) 2020-05-13
EP3649792B1 (en) 2022-03-23
DK3649792T3 (en) 2022-06-20

Similar Documents

Publication Publication Date Title
US9113279B2 (en) Method for adjusting a hearing apparatus via a formal language
JP5909669B2 (en) Hearing aid, hearing aid fitting system, and hearing aid fitting method
AU781256B2 (en) Method and system for on-line hearing examination and correction
CN107454536B (en) Method for automatically determining parameter values of a hearing aid device
Levitt A historical perspective on digital hearing aids: how digital technology has changed modern hearing aids
US10785580B2 (en) Method for adjusting parameters of a hearing system and hearing system
US20190213499A1 (en) Information processing apparatus, artificial intelligence identification method, and program
JP6837603B1 (en) Support systems, support methods and programs
US11425516B1 (en) System and method for personalized fitting of hearing aids
CN111492673B (en) Method for transmitting processing states and hearing matching application system
US20170325033A1 (en) Method for operating a hearing device, hearing device and computer program product
CN110769396B (en) Method, system and terminal equipment for robot to connect network
US10085096B2 (en) Integration of audiogram data into a device
JP7007778B1 (en) Hearing aid adjustment system and hearing aid adjustment method
JP2019128665A (en) Medication support program, device, and method
WO2020217359A1 (en) Fitting assistance device, fitting assistance method, and computer-readable recording medium
US11528569B2 (en) Method for transmitting information for adapting a hearing aid and networked computer infrastructure
JP2021018272A (en) Voice processing system, voice processor, and program
JP6167313B2 (en) hearing aid
CN111028937A (en) Real-time remote auscultation method and system
US20240121560A1 (en) Facilitating hearing device fitting
WO2023028122A1 (en) Hearing instrument fitting systems
US11849286B1 (en) Ear-worn device configured for over-the-counter and prescription use
Shehieb et al. Intelligent Hearing System using Assistive Technology for Hearing-Impaired Patients
US20160364540A1 (en) Patient communication system

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant