CN112351364B - Voice playing method, earphone and storage medium - Google Patents


Info

Publication number
CN112351364B
CN112351364B (application CN202110001135.XA)
Authority
CN
China
Prior art keywords
voice data
earphone
voice
playing
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110001135.XA
Other languages
Chinese (zh)
Other versions
CN112351364A (en)
Inventor
何定
刘治
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianan Technology Co ltd
Original Assignee
Shenzhen Qianan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianan Technology Co ltd filed Critical Shenzhen Qianan Technology Co ltd
Priority to CN202110307073.5A (CN112887871B)
Priority to CN202110307077.3A (CN112887872B)
Priority to CN202110001135.XA (CN112351364B)
Publication of CN112351364A
Application granted
Publication of CN112351364B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/025Services making use of location information using location based information parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W76/00Connection management
    • H04W76/10Connection setup
    • H04W76/14Direct-mode setup
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application relates to the field of audio data processing and provides a voice playing method, an earphone, and a storage medium. The voice playing method is applied to a first earphone and includes the following steps: acquiring voice data to be played; acquiring a first real-time position of the first earphone and detecting whether the first real-time position meets a preset position condition; and, if the first real-time position meets the position condition, playing the voice data. Embodiments of the application can improve the privacy security of voice playing.

Description

Voice playing method, earphone and storage medium
Technical Field
The present application belongs to the technical field of audio data processing, and in particular, to a voice playing method, an earphone, and a storage medium.
Background
An earphone is an audio player that uses a speaker placed near the ear to convert voice data into audible sound waves. An earphone lets the wearer listen to audio without disturbing others, and can also isolate ambient sound, which is very helpful in noisy environments such as recording studios, bars, travel, and sports.
However, current earphone voice playing methods offer low security, and privacy leaks can easily occur.
Disclosure of Invention
Embodiments of the present application provide a voice playing method, an earphone, and a storage medium, which can address the problems that earphone voice playing offers low security and easily leaks privacy.
A first aspect of an embodiment of the present application provides a voice playing method, which is applied to a first earphone, and includes:
acquiring voice data to be played;
acquiring a first real-time position of the first earphone, and detecting whether the first real-time position meets a preset position condition;
and if the first real-time position meets the position condition, playing the voice data.
A second aspect of the embodiments of the present application provides a voice sending method, which is applied to a second earphone, and includes:
acquiring voice data;
establishing a communication connection with a first earphone, and sending voice data to the first earphone, and playing the voice data by the first earphone according to the voice playing method provided by the first aspect.
A third aspect of the present application provides a voice playing apparatus configured on a first earphone, including:
an acquisition unit, configured to acquire voice data to be played;
the detection unit is used for acquiring a first real-time position of the first earphone and detecting whether the first real-time position meets a preset position condition;
and the playing unit is used for playing the voice data if the first real-time position meets the position condition.
A fourth aspect of the embodiments of the present application provides a voice sending apparatus configured on a second earphone, including:
an acquisition unit configured to acquire voice data;
a sending unit, configured to establish a communication connection with a first headset, and send voice data to the first headset, where the voice data is played by the first headset according to the voice playing method provided in the first aspect.
A fifth aspect of the embodiments of the present application provides an earphone, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the voice playing method provided in the first aspect and/or the voice sending method provided in the second aspect.
A sixth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the voice playing method provided in the first aspect and/or the voice sending method provided in the second aspect.
A seventh aspect of the embodiments of the present application provides a computer program product which, when run on an earphone, causes the earphone to implement the steps of the voice playing method provided in the first aspect and/or the voice sending method provided in the second aspect.
In the embodiments of the application, voice data to be played is acquired; then a first real-time position of the first earphone is acquired, and whether the first real-time position meets a preset position condition is detected; if it does, the voice data is played. That is, after receiving the voice data, the first earphone does not play it directly. Instead, it determines whether its current first real-time position meets the position condition, and plays the voice data only when it does — for example, only when the first earphone is located in a conference room. This avoids the privacy leakage that would result from playing voice data when the first earphone does not meet the position condition, and improves the security of earphone voice playing.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the embodiments or the prior-art description are briefly introduced below. Clearly, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation process of a voice playing method according to an embodiment of the present application;
fig. 2a is a schematic diagram of an earphone provided in an embodiment of the present application establishing connection through another terminal;
fig. 2b is a schematic diagram of the connection between the earphones according to the embodiment of the present application;
fig. 3 is a schematic flowchart of a specific implementation of step S102 according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a specific implementation of step S301 provided in the embodiment of the present application;
fig. 5 is a schematic flowchart of a specific implementation of processing and playing voice data according to region information according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a third area provided by an embodiment of the present application;
fig. 7 is a schematic diagram of a plurality of earphones for establishing communication according to an embodiment of the present disclosure;
fig. 8 is a schematic flowchart of a first implementation of step S103 provided in the embodiment of the present application;
fig. 9 is a schematic flowchart of a second implementation of step S103 provided in the embodiment of the present application;
fig. 10 is a schematic flow chart illustrating an implementation process of a voice sending method according to an embodiment of the present application;
fig. 11 is a schematic view illustrating an implementation process of sound effect adjustment according to an embodiment of the present application;
fig. 12 is a schematic implementation flow diagram of step S1002 according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a voice playing apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a voice transmission apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an earphone provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the present application and are not intended to limit it. All other embodiments derived by those skilled in the art from the embodiments of the present application without creative effort fall within its protection scope.
An earphone is an audio player that uses a speaker placed near the ear to convert voice data into audible sound waves. An earphone lets the wearer listen to audio without disturbing others, and can also isolate ambient sound, which is very helpful in noisy environments such as recording studios, bars, travel, and sports.
However, current earphone voice playing methods offer low security, and privacy leaks can easily occur.
For example, users with impaired hearing often set the earphone to maximum volume; people nearby may then still hear the sound leaking from the earphone, which causes privacy leakage.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 shows a schematic implementation flow of the voice playing method provided in an embodiment of the present application. The method may be applied to a first earphone, in situations where the privacy security of voice playing needs to be improved. The first earphone may be, for example, an over-ear headphone, an in-ear earphone, or a headset.
In some embodiments of the present application, the voice playing method may also be applied to terminals such as mobile phones, tablet computers, or smart watches; when such a terminal plays voice data out loud, the method provided in this application can likewise improve the privacy security of the playback.
Specifically, the voice playing method may include the following steps S101 to S103.
Step S101, obtaining voice data to be played.
Here, the voice data is the data to be played by the first earphone. Its source and the way it is obtained can be chosen according to the actual situation.
In some embodiments of the present application, the voice data may be pre-stored in a storage module of the first earphone, or received from another terminal after the first earphone establishes a communication connection with that terminal.
The terminal may be a mobile phone, a computer, a smart watch, or another device; the first earphone can receive voice data sent by the terminal after pairing with it.
In particular, in some embodiments of the present application, the terminal may also be a second earphone distinct from the first earphone. That is, the first earphone may establish a communication connection with the second earphone and receive voice data sent by it. Conventionally, as shown in fig. 2a, when user A wearing a first earphone 21 and user B wearing a second earphone 22 communicate, another terminal such as a mobile phone or computer is needed as an intermediate relay. For example, after user B's mobile phone 24 is paired with the second earphone 22, it acquires the voice data collected by the second earphone 22 and forwards it over a pre-established communication connection to user A's mobile phone 23, which in turn sends it to the first earphone 21 for playback. Privacy leaks can easily occur during this forwarding.
In some embodiments of the present application, as shown in fig. 2B, the first headset 25 of user a may directly establish a communication connection with the second headset 26 of user B and receive voice data transmitted by the second headset.
In this case, neither the first earphone nor the second earphone needs to be connected to a terminal: the voice data collected by the second earphone does not have to be relayed to the first earphone through a terminal. Instead, the first earphone receives the voice data directly over the communication connection between the two earphones, which avoids the privacy leaks that can occur while a terminal forwards voice data.
The communication connection can be established in a manner chosen according to the actual situation. For example, when the first and second earphones are both Bluetooth earphones, they may pair over Bluetooth. As another example, if both earphones are equipped with a network module, they may establish the connection over a wireless network. In other embodiments of the present application, the user may connect both earphones in advance to the same control terminal, such as a mobile phone or computer, and the control terminal establishes the communication connection between them.
In other embodiments of the present application, the first and second earphones may each be equipped with a motion (gesture) sensor, with which each earphone can detect whether both are shaking at the same time. If the first and second earphones are in a shaking state simultaneously, they can establish a communication connection.
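As a minimal sketch (not part of the patent's claims), the simultaneous-shake pairing check above can be reduced to comparing the timestamps at which each earphone reported a shaking state; the window length is an assumption for illustration:

```python
SHAKE_WINDOW_S = 2.0  # assumed tolerance for "shaking at the same time"


def should_pair(first_shake_ts: float, second_shake_ts: float,
                window_s: float = SHAKE_WINDOW_S) -> bool:
    """Pair the two earphones only if both reported a shaking state
    within the same short time window."""
    return abs(first_shake_ts - second_shake_ts) <= window_s
```

In practice the shake events would come from each earphone's motion sensor; only the coincidence test is shown here.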
Step S102, a first real-time position where the first earphone is located is obtained, and whether the first real-time position meets a preset position condition is detected.
The first real-time position refers to a current position of the first earphone.
In some embodiments of the present application, if the first earphone is equipped with a Global Positioning System (GPS) module, it may obtain its first real-time position via GPS. In other embodiments, the first earphone may instead carry a camera that captures an environment image, and the first real-time position on a pre-stored electronic map is determined by comparing the environment image against that map.
It should be noted that other manners of obtaining the first real-time position are also applicable to the scheme of the present application, and details thereof are not described herein.
In the embodiment of the application, before playing the voice data, it is required to first detect whether the first real-time position meets a preset position condition, so as to determine whether the current state of the first earphone meets the requirement of privacy protection.
The position condition describes whether the earphone is at a location that satisfies the privacy protection requirement. Specifically, it may require that the first real-time position of the first earphone lies within an area meeting the privacy protection requirement, or it may constrain the position of the first earphone relative to other terminals or people — for example, requiring that the distance between the first real-time position and a second real-time position of another terminal meets the privacy protection requirement.
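Both flavors of the position condition — membership in an allowed area and a minimum distance from another position — can be sketched as follows. The function names and the rectangular region model are illustrative assumptions, not taken from the patent:

```python
import math


def in_region(pos, region):
    """True if pos=(x, y) lies inside the axis-aligned rectangle
    region=(x_min, y_min, x_max, y_max)."""
    x, y = pos
    x_min, y_min, x_max, y_max = region
    return x_min <= x <= x_max and y_min <= y <= y_max


def meets_position_condition(first_pos, region=None, other_pos=None,
                             distance_threshold=0.0):
    """Check the preset position condition: inside an allowed area,
    and/or far enough away from another terminal or person."""
    if region is not None and not in_region(first_pos, region):
        return False
    if other_pos is not None and math.dist(first_pos, other_pos) <= distance_threshold:
        return False
    return True
```

Either constraint can be supplied alone or both together, matching the two variants described above.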
Step S103, if the first real-time position meets the position condition, the voice data is played.
In the embodiment of the application, if the first real-time position meets the position condition, it indicates that the first real-time position where the first earphone is located meets the requirement of privacy protection, and therefore, the first earphone can play the voice data.
In some embodiments of the present application, if the first real-time location does not satisfy the location condition, it indicates that the first real-time location where the first earphone is located does not satisfy the requirement of privacy protection, and therefore, the first earphone may not play the voice data, or may play the processed voice data after processing the voice data.
For example, when a meeting needs to be held, each participant can use one earphone. In order to prevent the conference content from being leaked, the location condition may be set as whether the first earphone is located in the conference room, that is, when the first real-time location of the first earphone is located in the conference room, the first earphone satisfies the privacy protection condition, which indicates that the first real-time location satisfies the location condition, and at this time, the first earphone may play the voice data.
As another example, during a singing competition, to prevent other contestants from overhearing the prepared songs of the player wearing the first earphone, the position condition may be set as whether the distance to the other contestants exceeds a distance threshold. That is, when the distance between the first real-time position of the first earphone and the positions of the other contestants is greater than the threshold, the first earphone satisfies the privacy protection condition, the first real-time position meets the position condition, and the first earphone can play the voice data. The distance threshold can be adjusted to the actual situation.
In the embodiments of the application, voice data to be played is acquired; then a first real-time position of the first earphone is acquired, and whether the first real-time position meets a preset position condition is detected; if it does, the voice data is played. That is, after receiving the voice data, the first earphone does not play it directly. Instead, it determines whether its current first real-time position meets the position condition, and plays the voice data only when it does — for example, only when the first earphone is located in a conference room. This avoids the privacy leakage that would result from playing voice data when the first earphone does not meet the position condition, and improves the security of earphone voice playing.
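The S101–S103 flow recapped above can be sketched end to end; the callback-style interfaces (`get_position`, `position_ok`, `play`) are illustrative assumptions, not an API from the patent:

```python
def play_if_allowed(voice_data, get_position, position_ok, play):
    """Steps S101-S103: acquire the voice data, check the first
    real-time position against the preset condition, and play
    only when the condition is met."""
    pos = get_position()       # S102: first real-time position
    if position_ok(pos):       # S102: preset position condition
        play(voice_data)       # S103: play the voice data
        return True
    return False               # otherwise withhold (or post-process) it
```

A caller would supply the real positioning source (e.g. GPS) and speaker output in place of the callbacks.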
For elderly users, even if the first earphone is set to a high volume, privacy leakage is not a concern as long as the first real-time position of the first earphone they wear meets the position condition.
In the embodiments of the application, the position condition may be set differently based on the requirements of different scenarios, and the way of detecting whether the first real-time position meets the preset position condition may differ accordingly. This is described below with specific examples.
In some embodiments of the present application, as shown in fig. 3, the detecting whether the first real-time location satisfies the preset location condition may include the following steps S301 to S303.
Step S301, a first wearer of a first headset is identified, and a first zone associated with the first wearer is obtained.
Wherein the first wearer refers to a user currently wearing the first headset. The identification mode of the first wearer can be selected according to actual conditions.
In some embodiments of the present application, each first earphone may be associated in advance with a corresponding wearer. For example, when employees are on duty, each employee may be issued a corresponding earphone. The first wearer can then be identified directly from information such as the identifier or serial number of the first earphone.
In other embodiments of the present application, the first earphone may identify the first wearer by other means, such as voiceprint recognition, face recognition, or identifier recognition (e.g., recognizing the job number on an employee's badge).
In an embodiment of the present application, after the first wearer is identified, the first area associated with the first wearer can be obtained. The first area is an area in which the first earphone is allowed to play voice data. Note that each first wearer may be associated with a first area, and the first areas associated with different first wearers may be the same or different.
How the first area is obtained can be chosen according to the actual situation. For example, in some embodiments of the present application, each piece of the first wearer's personal information is associated with an area in advance; after identifying the first wearer, the first earphone determines the associated area from the mapping between the personal information and areas, and uses that area as the first area. For instance, the personal information may be a job number, and the area may be the workstation area corresponding to that job number.
Step S302, detecting whether the first real-time location is located in the first area.
Specifically, in some embodiments of the present application, the first earphone may store an electronic map and use it to detect whether the first real-time position lies within the first area. In other embodiments, the first earphone may instead determine whether the current first real-time position is within the first area based on scene recognition.
Step S303, if the first real-time location is located in the first area, it is determined that the first real-time location satisfies the location condition.
As described above, each first wearer may be associated with a first area. When the first real-time position of the first earphone lies within the first area associated with its first wearer, the current position of the first earphone satisfies the privacy protection requirement; it can therefore be confirmed that the first real-time position meets the position condition, and the first earphone plays the voice data.
For example, when a meeting needs to be held, each participant generally has a seat in the meeting room. If each participant wears a first headset, i.e., each participant is a first wearer, the first region of each first headset may be a region within a predetermined distance from the seat associated with its first wearer. That is, when the first real-time position of the first earphone of the first wearer is located on or near the seat of the first wearer, the first real-time position of the first earphone satisfies the position condition, and therefore, the first earphone plays the voice data, so that the first wearer can listen to the voice data through the first earphone on or near the seat.
As another example, when a meeting must be held but the conference room is occupied, each participant is typically required to join from their own workstation. The first area of the first earphone may then be the workstation area associated with its first wearer. That is, when the first real-time position of the first wearer's earphone lies within that wearer's workstation area, the first real-time position meets the position condition; the first earphone can therefore play the voice data, and the first wearer can listen to it from their own workstation area.
In an embodiment of the application, based on identifying the first wearer of the first earphone, the first area associated with that wearer can be obtained; whether the first real-time position lies within the first area is then detected, and if it does, the first real-time position is confirmed to meet the position condition. In other words, the first earphone plays voice data only inside the area associated with its first wearer, and different first earphones have their own areas meeting the privacy protection requirement — for example, a first wearer can listen to voice data in their own workstation area but not in other people's areas — which improves the privacy security of earphone voice playing.
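A minimal sketch of steps S301–S303, under an assumed mapping from wearer identifiers (e.g. job numbers) to rectangular workstation regions — the table contents and names are hypothetical:

```python
# Hypothetical wearer-to-region table; each region is (x_min, y_min, x_max, y_max).
FIRST_REGION_BY_WEARER = {
    "E1001": (0.0, 0.0, 3.0, 3.0),   # wearer E1001's workstation area
    "E1002": (3.0, 0.0, 6.0, 3.0),   # wearer E1002's workstation area
}


def position_condition_met(wearer_id, first_pos):
    """S301: look up the first area associated with the identified wearer;
    S302/S303: the condition holds only if the first real-time position
    lies inside that area."""
    region = FIRST_REGION_BY_WEARER.get(wearer_id)
    if region is None:
        return False              # unknown wearer: refuse to play
    x, y = first_pos
    x_min, y_min, x_max, y_max = region
    return x_min <= x <= x_max and y_min <= y <= y_max
```

Different wearers thus get different (or shared) allowed areas simply by what the table maps them to.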
In practical applications, the first area associated with the first wearer in step S301 need not be a fixed area. Therefore, as shown in fig. 4, in some embodiments of the present application, obtaining the first area associated with the first wearer may include the following steps S401 to S404.
Step S401, a voice instruction associated with the first wearer is acquired.
A voice instruction is an instruction directing the first wearer to perform some operation. It may be obtained in different ways.
Specifically, in some embodiments of the present application, the voice instruction may be obtained by recognizing the voice data. For example, when the voice data is sent by a second earphone worn by a conference host and contains an instruction the host gives to the first wearer, such as "please help me fetch a conference file from the office," the first earphone can recognize the voice instruction from the voice data.
In some embodiments of the present application, the voice instruction may also be obtained by recognizing voice data collected by the first earphone's own microphone. For example, the first wearer may say during the meeting, "I need to go to the office to get a conference file," and the first earphone recognizes the voice instruction from the voice data collected by the microphone.
Step S402, according to the voice command, identifying the target position pointed by the voice command.
The target position is the position the first wearer must go to while carrying out the voice instruction. How the target position is identified can be chosen according to the actual situation.
In some embodiments of the present application, the first earphone may recognize a keyword related to the target position in the instruction and determine the target position from that keyword. For example, if the voice instruction contains the name of the target position, the target position can be determined from that name.
If the voice command includes an operation that the first wearer needs to perform, the first earphone can also recognize a keyword related to the operation content, and determine a target position pointed by the voice command according to the keyword. For example, if the voice command is "get a financial contract", the keyword related to the content can be identified as "financial contract" according to the voice command, and accordingly, the target location associated with the command is a financial office.
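This keyword-to-location lookup can be sketched as follows. This is a minimal illustration only: the table, the function name, and the exact-substring matching are all hypothetical, since a real earphone would rely on a trained speech recognizer rather than string matching.

```python
# Hypothetical keyword-to-location table; a real system would use a
# trained speech recognizer rather than exact substring matching.
KEYWORD_LOCATIONS = {
    "financial contract": "financial office",
    "conference file": "office",
}

def target_location(voice_instruction):
    """Return the target location implied by the first keyword found
    in the voice instruction, or None when no keyword matches."""
    text = voice_instruction.lower()
    for keyword, location in KEYWORD_LOCATIONS.items():
        if keyword in text:
            return location
    return None
```

For instance, `target_location("Please get a financial contract")` would resolve to `"financial office"`, matching the example in the text.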
Step S403, determining an audible path of the first earphone according to the first real-time location and the target location.
The audible path is a path between the first real-time location and the target location. Specifically, based on the electronic map stored in the first earphone, a path from the first real-time location to the target location may be determined; this path is the audible path.
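Path determination on the stored electronic map can be sketched with a standard breadth-first search. The grid representation below is an assumption for illustration; the patent does not specify the map format.

```python
from collections import deque

def audible_path(grid, start, goal):
    """Breadth-first search from start to goal on a 2-D occupancy grid.
    grid[r][c] == 0 means walkable; returns the shortest list of
    (row, col) cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

On a 3x3 grid with the middle row blocked except on the right, the search detours around the obstacle, which is the behavior one would expect from a route computed on an indoor map.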
Step S404, determining a first area according to the audible path.
In some embodiments of the present application, the listening of the first wearer should remain uninterrupted while the first wearer goes to the target position to perform the operation. Accordingly, after determining the audible path, the first area may be determined according to the audible path; for example, an area within a preset distance range of the audible path may be determined as the first area.
In the embodiment of the application, a voice instruction associated with the first wearer is obtained, the target position pointed to by the voice instruction is identified according to the voice instruction, then an audible path of the first earphone is determined according to the first real-time position and the target position, and a first area is determined according to the audible path. That is to say, when the first wearer wears the first earphone and goes to the target position to perform the operation corresponding to the voice instruction, the first earphone can still play the voice data. Therefore, the present application can adjust the first area according to the actual demand, namely the voice instruction, ensuring the practicality of the first earphone while guaranteeing privacy security.
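Step S404's "area within a preset distance range of the audible path" can be sketched as a membership test against a buffer around a polyline, assuming the path is reduced to a sequence of 2-D waypoints (an assumption for illustration, not the patent's representation):

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:          # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def in_first_area(point, path, preset_distance):
    """True if the point lies within preset_distance of any segment
    of the audible path (the buffer that forms the first area)."""
    return any(point_segment_distance(point, a, b) <= preset_distance
               for a, b in zip(path, path[1:]))
```

With a straight path from (0, 0) to (10, 0) and a preset distance of 3, a point 2 units off the path is inside the first area while a point 5 units off is not.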
In practice, the audible path may pass through a plurality of pre-planned second areas; for example, it may pass through a front desk, a general manager's office, and/or a tea room. In order to further improve privacy security, in some embodiments of the present application, the first earphone may process the voice data before playing it.
Specifically, as shown in fig. 5, the playing of the voice data may include the following steps S501 to S503.
Step S501, determining a third area in the first area according to the second area planned in advance.
The second areas are pre-planned areas, which can be planned in advance according to information such as function, occupancy, and so on. For example, an office building can be pre-planned into a plurality of second areas, including a front desk, an office area, a tea room, a conference room, and the like.
In some embodiments of the present application, the first earphone may acquire the second area planned in advance through an electronic map or a design drawing of a current scene. The first earphone can determine a third area in the first area according to the second area, wherein the third area is the overlapping area of the first area and the second area.
For convenience of illustration, fig. 6 provides a schematic view of a scene. In the current scene, the pre-planned second areas include a front desk 62, a general manager's office 63, and a toilet 64. After the first earphone determines the audible path 60, an area within a preset distance range of the audible path 60 is determined as a first area 61; the overlapping area of the first area 61 and the front desk 62 is determined as a third area 65, the overlapping area of the first area 61 and the general manager's office 63 is determined as a third area 66, and the overlapping area of the first area 61 and the toilet 64 is determined as a third area 67. When the first wearer walks along the audible path 60, situations may arise in which the first wearer needs to make way for others; at that point the first wearer may enter a third area, i.e., the first real-time position of the first earphone is located within the third area.
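The third-area computation of step S501 can be sketched as region intersection. Here the areas are assumed to be axis-aligned rectangles `(x1, y1, x2, y2)` purely for illustration; real floor plans would need general polygon clipping.

```python
def overlap(a, b):
    """Intersection of two axis-aligned rectangles (x1, y1, x2, y2),
    or None when they do not overlap."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def third_areas(first_area, second_areas):
    """Third areas are the overlaps of the first area with each
    pre-planned second area (step S501)."""
    return {name: r for name, rect in second_areas.items()
            if (r := overlap(first_area, rect)) is not None}
```

A second area that the first area never touches (such as a room far from the audible path) simply produces no third area.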
Step S502, if the first real-time location is located in the third area, acquiring area information of the third area.
The area information refers to characteristic information of the third area, and may be, for example, a privacy level, an area opening degree, a noise degree, or the like. The obtaining manner of the area information may be selected according to actual situations, for example, in some embodiments of the present application, since the area information of the second area is generally known, the area information of each third area may be determined as the area information of the second area associated with the third area. In other embodiments of the present application, the area information may be obtained by recognizing the scene information by the first earphone after the user enters the third area.
In some embodiments of the present application, when the first real-time location is located in the third area, the first earphone may process the voice data according to the acquired area information of the third area.
Step S503, processing the voice data according to the region information, and playing the processed voice data.
In some embodiments of the present application, the voice data may be processed differently according to the specific content of the area information. Further, because each third area has its own area information, the voice data may be processed differently in different third areas.
Specifically, in some embodiments of the present application, the area information may include the privacy level of the third area. The privacy level indicates the privacy security of the third area: a high privacy level corresponds to high privacy security, and a low privacy level corresponds to low privacy security. Since each second area has a corresponding function, amount of traffic, and other known information, its privacy level may be determined in advance, and the privacy level of the third area may therefore be determined as the privacy level of its associated second area. In this case, processing the voice data based on the area information may include: if the privacy level is lower than the preset privacy level, identifying the keywords in the voice data and performing silencing processing on those keywords.
The above-mentioned preset privacy level can be adjusted according to actual needs, and the keywords are words with high privacy. When the privacy level of the third area is lower than the preset privacy level, the third area does not meet the requirement of privacy protection; words with higher privacy in the voice data must therefore be silenced before the processed voice data is played.
It should be noted that, if the privacy level of the third area is equal to or higher than the preset privacy level, it indicates that the third area meets the requirement of privacy protection, and the voice data may not be processed.
Continuing with fig. 6: because the flow of people at the front desk 62 is large and many outside people are present, its privacy level is lower than the preset privacy level. The privacy level of the third area 65 is determined as the privacy level of the front desk 62, that is, the privacy level of the third area 65 corresponding to the front desk 62 is lower than the preset privacy level. Therefore, when the first real-time position of the first earphone is located in the third area 65, the first earphone needs to identify the keywords in the voice data, perform silencing processing on them, and then play the processed voice data, so as to avoid privacy leakage in the third area 65. The privacy level of the general manager's office 63 is higher than the preset privacy level, and the privacy level of the third area 66 is determined as the privacy level of the general manager's office 63, that is, the privacy level of the third area 66 corresponding to the general manager's office 63 is higher than the preset privacy level. Therefore, when the first real-time position of the first earphone is located in the third area 66, the first earphone can play the voice data directly.
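The level-dependent keyword silencing can be sketched on a text transcript. This is illustrative only: a real earphone would silence the audio frames aligned with each recognized keyword, not edit a string, and the `[muted]` marker is a stand-in for silence.

```python
def mute_keywords(transcript, keywords, mask="[muted]"):
    """Replace privacy-sensitive keywords with a mask. Stand-in for
    silencing the audio frames aligned with each keyword."""
    for keyword in keywords:
        transcript = transcript.replace(keyword, mask)
    return transcript

def play(transcript, privacy_level, preset_level, keywords):
    """Mute keywords only when the third area's privacy level is
    below the preset privacy level (step S503); otherwise play the
    voice data unchanged."""
    if privacy_level < preset_level:
        return mute_keywords(transcript, keywords)
    return transcript
```

In the fig. 6 scene, the same voice data would be played muted in the third area 65 (low privacy level) and unmodified in the third area 66 (high privacy level).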
Further, in some embodiments of the present application, if the privacy level is equal to or higher than a preset privacy level, the first earphone may further identify the number of people in the third area, and if the number of people is greater than the preset number of people, identify a keyword in the voice data, and perform a mute process on the keyword in the voice data.
The preset number of people refers to the maximum number of people allowed in the third area under the condition that the privacy protection requirement is met, and the specific value of the preset number of people can be adjusted according to the actual situation.
The identification mode of the number of the people in the third area can be selected according to actual conditions.
For example, in some embodiments of the present application, the persons in the third area may be identified by means of image recognition, and the number of persons in the third area is counted; the laser radar may also be used to identify the outline of the person and determine the number of persons in the third area based on the outline of the person. Or, in other embodiments of the present application, the noise level of the current environment may be collected by a microphone of the first earphone, and the number of people in the third area may be estimated according to the noise level of the environment.
If the number of people in the third area is greater than the preset number of people, it indicates that the current flow of people in the third area is large and that the problem of privacy disclosure is likely to occur. Therefore, the first earphone can identify the keywords in the voice data and perform silencing processing on them.
In the embodiment of the application, according to the privacy level of the third area, the first earphone performs silencing processing on the keywords in the voice data when the privacy level is lower than the preset privacy level, or when the privacy level is equal to or higher than the preset privacy level but the number of people is greater than the preset number of people, and then plays the processed voice data. In this way, privacy disclosure in the third area can be avoided, and the privacy security of voice playing is improved.
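The combined decision rule just summarized (privacy level first, head count as a secondary check) can be written down compactly. The thresholds are hypothetical parameters:

```python
def needs_muting(privacy_level, preset_level, people_count, preset_people):
    """Decide whether keywords must be silenced in the third area:
    mute when the privacy level is below the preset level, or when
    the level is acceptable but the area is too crowded."""
    if privacy_level < preset_level:
        return True
    return people_count > preset_people
```

The rule intentionally ignores the head count when the privacy level is already too low, since muting is required in that case regardless of how many people are present.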
In other embodiments of the present application, the area information may further include the noise level of the third area. The first earphone may perform noise reduction processing on the voice data according to the noise level of the third area, or may increase the volume of the voice data when the noise level of the third area is greater than a preset noise level threshold.
In an embodiment of the application, if the first real-time location is located in the third area, the first earphone acquires area information of the third area, processes the voice data according to the area information, and plays the processed voice data. The first earphone can process different voice data when the first earphone is located in different third areas according to different area information, and using experience of a user can be improved under the condition that privacy safety is guaranteed.
In order to further improve the privacy security, if the first real-time position of the first earphone meets the position condition, the voice data can be further processed according to the actual situation before being played.
Specifically, in some embodiments of the present application, playing the voice data may include: acquiring a first permission of the first wearer, processing the voice data according to the first permission, and playing the processed voice data.
In some embodiments of the present application, after identifying the first wearer, a first permission of the first wearer may be obtained. The first permission refers to the first wearer's permission to listen to private information contained in the voice data: the higher the first permission, the more private information the first wearer can listen to; the lower the first permission, the less. Accordingly, processing the voice data according to the first permission may include: recognizing keywords in the voice data, and performing silencing processing on the keywords according to the first permission. The silencing result may differ for different first permissions.
Specifically, the first earphone may store predetermined minimum permission requirements associated with different keywords, and compare the first permission with the minimum permission requirement associated with each keyword. If the first permission is equal to or higher than the minimum permission requirement associated with a keyword, that keyword need not be silenced; if the first permission is lower than the minimum permission requirement, the keyword must be silenced.
To illustrate with a practical application scenario: during a conference, a plurality of first earphones may be connected to each other, and the conference host sends voice data to the other first earphones through his or her earphone or mobile phone; each first earphone then processes the voice data according to the first permission of its own first wearer. For example, suppose the voice data includes a keyword X, a keyword Y, and a keyword Z, where the minimum permission requirement associated with keyword X is level 5, that of keyword Y is level 3, and that of keyword Z is level 1; level 5 represents a higher degree of privacy importance than level 3, and level 3 higher than level 1. If the first wearer is a staff member, the first permission of the first wearer may be level 2, in which case the first earphone worn by that wearer needs to perform silencing processing on keyword X and keyword Y. If the first wearer is a manager, the first permission may be level 4, in which case the first earphone needs to perform silencing processing only on keyword X.
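The per-keyword comparison in this example can be sketched directly; the permission table below reuses the hypothetical levels from the example above.

```python
# Minimum permission requirement per keyword (hypothetical values
# taken from the worked example: X -> 5, Y -> 3, Z -> 1).
MIN_PERMISSION = {"X": 5, "Y": 3, "Z": 1}

def keywords_to_mute(first_permission, min_permission=MIN_PERMISSION):
    """A keyword is silenced when the wearer's first permission is
    below the minimum permission requirement for that keyword."""
    return {kw for kw, level in min_permission.items()
            if first_permission < level}
```

A level-2 wearer mutes X and Y, a level-4 wearer mutes only X, and a level-5 wearer mutes nothing, matching the scenario described above.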
In the embodiment of the application, for the same voice data, different first earphones can adjust the voice data based on the first permission of their current first wearers, so that a user with a low permission cannot hear private information of high privacy importance, which improves the privacy security of voice playing.
In other embodiments of the present application, the voice data may include voice data respectively transmitted by the plurality of second earphones.
Specifically, the first earphone may be communicatively connected with a plurality of second earphones. For example, as shown in fig. 7, during a conference, the earphones worn by the participants (e.g., the earphone 71, the earphone 72, the earphone 73, the earphone 74, and the earphone 75 in fig. 7) may be communicatively connected to each other. The earphone worn by a participant who is speaking is a second earphone, and since multiple participants may speak simultaneously during the conference, there may be multiple second earphones (e.g., the earphone 72, the earphone 73, the earphone 74, and the earphone 75 in fig. 7). The earphone 71 (the first earphone) of the first wearer (one of the participants) may then receive the voice data sent by the earphone 72, the earphone 73, the earphone 74, and the earphone 75 (the multiple second earphones).
At this time, as shown in fig. 8, in some embodiments of the present application, the above-mentioned playing of the voice data may include the following steps S801 to S803.
Step S801, acquiring the priority of the second wearer associated with each voice data.
The priority is the importance of the voice data of the second wearer, and the second wearer is the wearer of the second headset. Generally, the priority may be determined according to information such as the position, age, etc. of the second wearer.
In some embodiments of the present application, after receiving the respective voice data, a second wearer corresponding to each voice data may be identified based on a pre-established voice database.
In further embodiments of the application, after identifying its second wearer, the second earphone may also transmit information about the second wearer to the first earphone together with the voice data, for example the second wearer's ID or job title. The first earphone may then determine the priority of the second wearer of each voice data according to the received information.
Step S802, according to the priority, sequencing each voice data to obtain a voice playing sequence.
The voice playing sequence refers to the order of playing each voice data.
In some embodiments of the present application, the voice data may be sorted based on their different priorities; in general, the higher the priority of a voice data, the earlier it is placed in the voice playing sequence.
Step S803, playing each voice data according to the voice playing sequence.
To illustrate with a practical application scenario: during a conference, suppose three voice data are obtained, where the second wearer associated with the first voice data is a manager, the second wearer associated with the second voice data is a team leader, and the second wearer associated with the third voice data is a staff member. According to these roles, the priority of the second wearer associated with the first voice data is greater than that of the second wearer associated with the second voice data, which in turn is greater than that of the second wearer associated with the third voice data. Therefore, the voice playing sequence is: the first voice data is played first, then the second voice data, and then the third voice data.
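The sorting of steps S801-S802 can be sketched as a priority-keyed sort. The role-to-priority table is a hypothetical stand-in for whatever job-title information the second earphones transmit:

```python
# Hypothetical priority table derived from the second wearer's role.
ROLE_PRIORITY = {"manager": 3, "team leader": 2, "staff": 1}

def play_order(voice_data):
    """Sort received voice data so that clips from higher-priority
    second wearers are played first. voice_data is a list of
    (wearer_role, clip_id) pairs."""
    return [clip for role, clip in
            sorted(voice_data,
                   key=lambda item: ROLE_PRIORITY[item[0]],
                   reverse=True)]
```

Because Python's sort is stable, two clips from wearers of equal priority keep their arrival order, which is a reasonable tie-breaking choice here.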
In the embodiment of the application, when multiple voice data are received, the first earphone can sort them according to priority to obtain a voice playing sequence, and then play each voice data in that sequence, so that the voice data are played according to their importance and simultaneous playback does not prevent the first wearer from hearing the voice content clearly. Since the voice playing order is determined by the priority of the second wearer from whom each voice data originates, the higher the priority of the second wearer, the earlier the voice data is played, and the first wearer hears the voice data of higher-priority second wearers first.
In other embodiments of the present application, the playing the voice data may further include: detecting whether the first earphone meets a safety condition; and if the first earphone does not meet the safety condition, processing the voice data and playing the processed voice data.
The safety condition describes whether the current state of the first earphone meets the requirement of privacy protection; it may relate to the first wearer or to other persons. When the first earphone does not meet the safety condition, the state of the first earphone does not meet the requirement of privacy protection, and the voice data therefore needs to be processed before playing. When the first earphone meets the safety condition, the state of the first earphone meets the requirement of privacy protection, and the voice data can be played directly.
Specifically, in some embodiments of the present application, the first earphone may acquire identity information of a person whose distance from the first earphone is smaller than a distance threshold, and determine whether the first earphone is in a safe state according to the identity information of the person.
The identity information may include whether the person is an inside person in the current scene, whether the person has a permission to listen to voice, and the like, and the distance threshold may be adjusted according to an actual situation.
To illustrate with a specific application scenario: while the first wearer listens to voice data in his or her workstation area, another person may appear there; for example, another employee may come to the workstation area wishing to talk with the first wearer, and the distance between that employee and the first earphone may be smaller than the distance threshold. If the first wearer is playing the earphone at a relatively high volume, the employee may overhear the voice data. Therefore, the first earphone can identify other persons appearing in the workstation area and determine whether each person has permission to listen to the voice; if a person does not, the first earphone confirms that it does not meet the safety condition. At this point, the first earphone may silence the keywords in the voice data or not play the voice data at all.
In particular, in some embodiments of the present application, the person whose distance from the first earphone is less than the distance threshold may be wearing an earphone that has a communication connection with the first earphone. In this case, playing the voice data may include: detecting whether the third wearer has permission to listen to the voice data; and, if the third wearer does not have that permission, processing the voice data and playing the processed voice data.
The third wearer is a wearer of the third earphone, the third earphone is an earphone which is in communication connection with the first earphone, and the distance between the third earphone and the first earphone is smaller than the distance threshold value.
That is, if a person whose distance from the first earphone is less than the preset distance wears a third earphone, whether the third wearer has permission to listen to the voice data can be detected through the communication connection between the third earphone and the first earphone. The permission information can be sent directly from the third earphone to the first earphone, so the first earphone can confirm the third wearer's permission without having to identify the third wearer itself.
Continuing the foregoing scenario: if the other employee entering the work area of the first wearer is also in the conference, that employee wears a third earphone, and the third earphone can identify its wearer (i.e., the other employee) when in use. The first earphone can then detect, through the communication connection with the third earphone, whether the third wearer has permission to listen to the voice data. For example, the first earphone sends an indication message to the third earphone, receives the permission information of the third wearer returned by the third earphone, and confirms from that information whether the third wearer has permission to listen to the voice data.
If the third wearer does not have permission to listen to the voice data, the first earphone does not satisfy the safety condition; in this case, the first earphone can process the keywords in the voice data, for example by performing silencing processing on them, and play the processed voice data. If the third wearer does have permission to listen to the voice data, the first earphone meets the safety condition and the voice data can be played directly.
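This bystander check can be sketched as follows, again on a text transcript for illustration; the list of booleans stands in for the permission information reported by nearby third earphones over the headset-to-headset link.

```python
def play_with_bystander_check(transcript, bystander_permissions,
                              keywords, mask="[muted]"):
    """If every nearby third wearer has listening permission, play the
    voice data unchanged; otherwise silence the keywords first.
    bystander_permissions is a list of booleans, one per third
    wearer within the distance threshold."""
    if all(bystander_permissions):       # vacuously True when nobody is near
        return transcript
    for keyword in keywords:
        transcript = transcript.replace(keyword, mask)
    return transcript
```

Note that `all([])` is `True`, so the voice data plays unchanged when no third earphone is within the distance threshold, which matches the intended behavior.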
The embodiment of the application detects whether the third wearer has permission to listen to the voice data; if not, the voice data is processed and the processed voice data is played. Thus, when a third wearer without listening permission appears in the first area, the voice data is processed to prevent the third wearer from listening to it, which guarantees the privacy security of voice playing.
In other embodiments of the present application, the first earphone may further detect a relative position relationship between the first earphone and the ear of the first wearer, determine whether the first earphone is in a wearing state based on the relative position relationship, and confirm that the first earphone does not satisfy the safety condition if the first earphone is not in the wearing state.
The relative position relationship may be obtained in various manners, for example by combining a laser sensor with image recognition, or by using an attitude sensor to detect the motion trajectory of the first earphone and thereby determine its position relative to the first wearer.
Based on the above relative position relationship, it can be determined whether the first earphone is in a wearing state. If it is not, the first earphone may have been lost and picked up by someone else; in that case, it should be confirmed that the first earphone does not satisfy the safety condition, and the voice data should be processed. If the first earphone is in a wearing state, the voice data can be played normally.
In the embodiment of the application, the first earphone detects whether it meets the safety condition; if not, it processes the voice data and plays the processed voice data, which further improves the privacy safety of voice playing.
In other embodiments of the present application, the voice data includes a plurality of voice commands, and the playing order of each voice command is different. At this time, as shown in fig. 9, the above-described playing of the voice data may further include the following steps S901 to S903.
Step S901, identifying a target voice command in the plurality of voice commands, and obtaining a playing order of the target voice command.
The target voice instruction is a voice instruction associated with the first earphone.
Specifically, after receiving the voice data, the first earphone may recognize the voice instructions contained in it, identify among them the voice instruction associated with the first wearer based on the first wearer's information, and determine that instruction as the target voice instruction. The playing order of the target voice instruction among the plurality of voice instructions may then be acquired.
Step S902, if the playing sequence of the target voice command is the top playing, playing the target voice command, and sending a command playing prompt message to the next earphone of the first earphone after the playing of the target voice command is completed.
And the next earphone of the first earphone is the earphone which needs to play the next voice instruction corresponding to the target voice instruction.
Step S903, if the playing sequence of the target instruction is non-head playing, the target voice instruction is played after receiving an instruction playing prompt message sent by a previous earphone of the first earphone, and if the playing sequence of the target instruction is non-end playing, the instruction playing prompt message is sent to a next earphone of the first earphone after the target voice instruction is played.
And the previous earphone of the first earphone is the earphone which needs to play the previous voice instruction corresponding to the target voice instruction.
That is, based on the plurality of voice commands, the first earphones can play according to the order of the target voice commands, and for a scenario with a plurality of first earphones, each first earphone can play the associated target voice commands in sequence.
For example, suppose the voice data sent by the conference host contains three voice instructions: a voice instruction D, a voice instruction E, and a voice instruction F. The first earphone H worn by user G recognizes that its target voice instruction is voice instruction D and that the playing order is head playing; it therefore plays voice instruction D and, after the playing is completed, sends an instruction playing prompt message to the next earphone (the first earphone J worn by user I). The first earphone J recognizes that its target voice instruction is voice instruction E and that the playing order is non-head playing, so it plays voice instruction E after receiving the instruction playing prompt message from the previous earphone (the first earphone H); because the playing order of its target instruction is non-end playing, the first earphone J sends an instruction playing prompt message to the next earphone (the first earphone L worn by user K) after finishing playing. The first earphone L recognizes that its target voice instruction is voice instruction F and that the playing order is non-head playing; it plays voice instruction F after receiving the instruction playing prompt message from the previous earphone (the first earphone J), and since the playing order of its target instruction is end playing, the playing of the entire voice data is then finished.
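The prompt-passing flow of steps S901-S903 can be simulated in a few lines. This is a toy model: the prompt message is represented by a flag, whereas real earphones would exchange it over their communication connection.

```python
def sequential_playback(assignments):
    """Simulate steps S901-S903 with explicit prompt passing.
    assignments: list of (earphone, command) pairs in playing order.
    Each earphone waits for the prompt from its predecessor, plays
    its own target command, then prompts its successor."""
    log = []
    prompt = "head"                       # the head earphone needs no prompt
    for earphone, command in assignments:
        assert prompt in ("head", "play-next")   # wait for the prompt
        log.append(f"{earphone} plays {command}")
        prompt = "play-next"              # prompt message to the next earphone
    return log
```

Running it on the example above reproduces the order H, J, L: each earphone plays only its own instruction, and no instruction plays before its predecessor finishes.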
In the embodiment of the application, by recognizing the target voice instruction among the plurality of voice instructions, each first earphone plays only the target voice instruction associated with it, and the whole playing process follows a definite order. In a multi-earphone scenario, the wearer of each first earphone cannot hear the voice instructions of other wearers, which improves the privacy safety of voice playing. At the same time, because the voice instructions are played in a definite sequence, the time sequence of the plurality of voice instructions is satisfied and their execution efficiency is improved.
Fig. 10 is a schematic flow chart illustrating an implementation process of a voice sending method provided in an embodiment of the present application, where the method can be applied to a second headset and is applicable to a situation where voice playing privacy security needs to be improved. The second earphone may be an ear plug earphone, a headphone, an in-ear earphone, or a headset.
Specifically, the voice transmission method may include the following steps S1001 to S1002.
Step S1001: acquire voice data.
The voice data refers to data that the second earphone needs to send to the first earphone. The voice data may be obtained in different manners.
Specifically, the voice data may include at least one of first voice data transmitted by the first terminal and second voice data collected by a microphone on the second headset. The first terminal is a terminal establishing communication connection with the second earphone, and may be a mobile phone or a computer, for example.
That is, the second earphone may collect voice data through a microphone configured on the second earphone, or may receive voice data transmitted by the first terminal over a communication connection established with the first terminal.
Step S1002, establish a communication connection with the first headset, and send the voice data to the first headset.
In an embodiment of the application, after the voice data is acquired, the second earphone establishes a communication connection with the first earphone and sends the voice data to the first earphone over that connection. After receiving the voice data sent by the second earphone, the first earphone may play it according to the voice playing method described with reference to Figs. 1 to 9.
In the embodiment of the application, the second earphone acquires the voice data, establishes a communication connection with the first earphone, and sends the voice data to the first earphone, so that the first earphone can play it according to the voice playing method provided by the application. The voice data is thus played only when the privacy-protection requirement is met, which improves the privacy security of voice playback.
The voice data may contain both the first voice data and the second voice data, that is, both the first voice data transmitted by the first terminal and the second voice data collected by the microphone on the second earphone. For example, a presenter wearing the second earphone may be explaining the creative idea of a video being played on the first terminal; in this case, the voice data acquired by the second earphone includes the first voice data from the video on the first terminal and the presenter's words collected by the microphone (i.e., the second voice data).
If the first voice data and the second voice data are not processed, the difference between them may be very large; after the voice data is sent to the first earphone and played, the first wearer of the first earphone may fail to hear the content of one of the two clearly.
Therefore, in some embodiments of the present application, as shown in fig. 11, before sending voice data to the first headset, the method may further include: step S1101 and step S1102.
In step S1101, a first sound effect of the first voice data and a second sound effect of the second voice data are detected.
The first sound effect and the second sound effect are attributes related to the playback effect of the voice data, such as loudness, sampling rate, bit rate, number of channels, or sampling precision. These attributes may be obtained from parameters of the sound, such as its amplitude and frequency, during the acquisition of the voice data.
Step S1102, based on the first sound effect and the second sound effect, perform sound effect adjustment operation on at least one of the first voice data and the second voice data.
Specifically, when the difference between the first sound effect and the second sound effect is greater than a difference threshold, the second earphone may perform a sound-effect adjustment operation on at least one of the first voice data and the second voice data, so that the sound effects of the two become approximately the same.
It should be noted that, if the difference between the first sound effect and the second sound effect is less than or equal to the difference threshold, the second earphone need not perform any sound-effect adjustment operation on the first voice data or the second voice data.
The difference threshold may be set according to the specific attribute used as the sound effect.
In the embodiment of the application, the first sound effect of the first voice data and the second sound effect of the second voice data are detected, and a sound-effect adjustment operation is performed on at least one of the two based on the detected sound effects, so that the first sound effect and the second sound effect become as close as possible. As a result, when the first earphone plays the voice data, the first wearer can clearly hear both kinds of voice data.
For example, suppose the first sound effect and the second sound effect refer to the loudness of the voice data. If the loudness of the first voice data is much greater than that of the second voice data, the first wearer may not hear the second voice data when the first earphone receives and plays the voice data. With the method provided by the present application, the loudness of the first voice data or the second voice data is adjusted so that the first wearer can hear both.
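The threshold-gated loudness adjustment can be sketched as below. This is a minimal illustration under stated assumptions: levels are expressed in dBFS, the threshold value is arbitrary, and "raise the quieter stream to the level of the louder one" is just one possible adjustment policy — the patent only requires that the two sound effects become approximately the same.

```python
def equalize_loudness(first_db, second_db, diff_threshold_db=6.0):
    """If the loudness gap between the two streams exceeds the threshold,
    raise the quieter stream to the level of the louder one; otherwise
    leave both unchanged. Values in dBFS; the threshold and the matching
    policy are illustrative assumptions, not taken from the patent."""
    if abs(first_db - second_db) <= diff_threshold_db:
        return first_db, second_db  # gap within threshold: no adjustment
    louder = max(first_db, second_db)
    return louder, louder

print(equalize_loudness(-10.0, -30.0))  # (-10.0, -10.0): quieter stream raised
print(equalize_loudness(-12.0, -14.0))  # (-12.0, -14.0): gap within threshold
```

The same gate-then-adjust pattern would apply to other attributes (sampling rate, bit rate, and so on) with attribute-specific thresholds, as noted in the text.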
In other embodiments of the present application, if the voice data includes both the first voice data and the second voice data, the second earphone may further process them to obtain processed voice data in which each of the left channel and the right channel carries one of the first voice data and the second voice data.
That is, after receiving the processed voice data, the first earphone plays it such that its left channel plays one of the first voice data and the second voice data, and its right channel plays the other.
In this case, the first wearer may choose to wear only one earbud and thus listen to only one channel, which makes it convenient to attend to either the first voice data or the second voice data alone and improves the wearer's experience.
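The left/right channel separation can be sketched as a simple interleaving of two mono sample streams into stereo frames. This is an assumption-laden illustration (integer PCM samples, zero-padding of the shorter stream, the function name is hypothetical), not the patented signal path.

```python
def split_to_channels(first_voice, second_voice):
    """Combine two mono sample sequences into stereo frames: the first
    voice data on the left channel, the second on the right. The shorter
    stream is zero-padded so both channels have equal length."""
    n = max(len(first_voice), len(second_voice))
    left = list(first_voice) + [0] * (n - len(first_voice))
    right = list(second_voice) + [0] * (n - len(second_voice))
    return list(zip(left, right))

print(split_to_channels([1, 2, 3], [9, 8]))  # [(1, 9), (2, 8), (3, 0)]
```

A wearer using only the left earbud would then hear only the first voice data, and vice versa, as described above.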
In other embodiments of the present application, if the voice data includes the first voice data, the voice data may contain private content, because the first voice data is transmitted by the first terminal. To prevent privacy leakage, as shown in fig. 12, in some embodiments of the present application, sending the voice data to the first earphone may further include the following steps S1201 to S1203.
Step S1201, obtaining source information associated with the first voice data.
The source information is information related to the sound source of the first voice data. For example, the source information may be the name and position of the call partner, or the version number and security level of a video being played by the terminal.
Step S1202, determining whether the first voice data is internal information that cannot be sent to the outside according to the source information.
Since the second earphone is connected to the first terminal in some embodiments of the present application, certain situations may arise on the first terminal, such as the first terminal receiving a phone call, or the first terminal playing a video with a high security level due to a user's misoperation. To prevent the first voice data of the first terminal (e.g., call content or video content) from being leaked in such cases, the second earphone may determine, based on the source information, whether the first voice data is internal information that cannot be sent to the outside.
Specifically, the second earphone may determine, according to the position of the call partner, whether the conversation content (i.e., the received first voice data) is an internal message. Alternatively, the second earphone may determine, according to the name of the call partner, whether the call partner is related to the scene information of the current scene; if not, the second earphone determines that the first voice data is internal information that cannot be sent to the outside. For example, the second earphone may determine, from the call partner's name, whether the call partner is a participant of the current conference; if not, the first voice data is determined to be internal information that cannot be sent out. Alternatively, the second earphone may determine, according to the version number of a video, whether the first voice data of the video is internal information that cannot be sent out.
In step S1203, if the first voice data is not internal information, the first voice data is sent to the first earphone.
In some embodiments of the present application, if the first voice data is internal information, it must not be sent to other first earphones; that is, it is private information that the first wearer of a first earphone is not permitted to hear, so the second earphone does not send it to the first earphone. If the first voice data is not internal information, it may be sent to other first earphones, i.e., it may be heard by the first wearer of a first earphone, so the second earphone sends the first voice data to the first earphone.
In the embodiment of the application, source information associated with the first voice data is acquired, whether the first voice data is internal information that cannot be sent to the outside is determined according to the source information, and the first voice data is sent to the first earphone only if it is not internal information, so that internal information is not leaked to the first earphone and privacy security is improved.
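The source-information check can be sketched with two of the example rules above: a call partner who is not a participant of the current conference, or a video whose security level is too high. Everything here — the field names, the security-level threshold, and the function names — is a hypothetical illustration of the decision logic, not an API from the patent.

```python
def is_internal(source_info, conference_participants, max_security_level=1):
    """Return True when the source information marks the first voice data
    as internal. The two rules mirror the examples in the text; field
    names and the security-level threshold are assumptions."""
    caller = source_info.get("caller_name")
    if caller is not None and caller not in conference_participants:
        return True  # call partner is not a participant of the current conference
    if source_info.get("security_level", 0) > max_security_level:
        return True  # video security level too high to be sent outward
    return False

def send_if_allowed(first_voice_data, source_info, conference_participants):
    # Withhold internal information; forward everything else to the first earphone.
    if is_internal(source_info, conference_participants):
        return None
    return first_voice_data

participants = {"Alice", "Bob"}
print(send_if_allowed(b"...", {"caller_name": "Eve"}, participants))  # None (withheld)
```

In a real second earphone these rules would be evaluated before step S1203's send, so that internal first voice data never reaches the first earphone.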
It should be noted that the voice playing method and the voice sending method may be executed by the same earphone; that is, one earphone may serve both as a first earphone that receives and plays voice data sent by other devices and as a second earphone that sends voice data to other first earphones.
Taking fig. 7 as an example, suppose the headsets 71, 72, 73, 74, and 75 have all established communication connections with one another. Any one of them may then act as a first headset to receive voice data sent by the other headsets; similarly, any one of them may act as a second headset to transmit acquired voice data to the other headsets.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts, as some steps may, in accordance with the present application, occur in other orders.
Fig. 13 is a schematic structural diagram of a voice playing apparatus 1300 according to an embodiment of the present application, where the voice playing apparatus 1300 is configured on a first earphone. The voice playing apparatus 1300 may include: an acquisition unit 1301, a detection unit 1302, and a playback unit 1303.
An obtaining unit 1301, configured to obtain voice data to be played;
a detecting unit 1302, configured to obtain a first real-time position where the first earphone is located, and detect whether the first real-time position meets a preset position condition;
and a playing unit 1303, configured to play the voice data if the first real-time position meets the position condition.
In some embodiments of the present application, the detecting unit 1302 may be specifically configured to: identifying a first wearer of the first earphone and acquiring a first area associated with the first wearer, wherein the first area is an area allowing the first earphone to play the voice data; detecting whether the first real-time location is within the first area; and if the first real-time position is located in the first area, confirming that the first real-time position meets the position condition.
In some embodiments of the present application, the detecting unit 1302 may be specifically configured to: obtaining a voice instruction associated with the first wearer; according to the voice instruction, identifying a target position pointed by the voice instruction; determining an audible path of the first earphone according to the first real-time position and the target position; and determining the first area according to the audible path.
In some embodiments of the present application, the detecting unit 1302 may be specifically configured to: determining a third area in the first area according to a second area planned in advance, wherein the third area is a superposition area of the first area and the second area; if the first real-time position is located in the third area, acquiring area information of the third area; and processing the voice data according to the region information, and playing the processed voice data.
In some embodiments of the present application, the region information includes a privacy level of the third region; the detection unit 1302 may be specifically configured to: and if the privacy grade is lower than a preset privacy grade, identifying the keywords in the voice data, and carrying out silencing treatment on the keywords in the voice data.
In some embodiments of the present application, the voice data includes voice data respectively transmitted by a plurality of second earphones; the playing unit 1303 may be specifically configured to: acquiring the priority of a second wearer associated with each voice data, the second wearer being a wearer of a second headset; sequencing the voice data according to the priority to obtain a voice playing sequence; and playing each voice data according to the voice playing sequence.
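The priority-based sequencing performed by the playing unit can be sketched as a simple sort. This is a minimal illustration under assumptions: priorities are plain integers where a larger value means a higher-priority second wearer, and the function name is hypothetical.

```python
def order_by_priority(items):
    """Sort (priority, voice_data) pairs so that voice data from
    higher-priority second wearers is played first; Python's stable sort
    preserves arrival order among equal priorities."""
    return [data for _, data in sorted(items, key=lambda p: p[0], reverse=True)]

queue = [(1, "staff greeting"), (3, "host announcement"), (2, "manager note")]
print(order_by_priority(queue))
# ['host announcement', 'manager note', 'staff greeting']
```

The first earphone would then play the resulting list front to back as the voice playing sequence.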
In some embodiments of the present application, the playing unit 1303 may be specifically configured to: detecting whether the first earphone meets a safety condition; and if the first earphone does not meet the safety condition, processing the voice data and playing the processed voice data.
In some embodiments of the present application, the playing unit 1303 may be specifically configured to: detecting whether a third wearer has permission to listen to the voice data; the third wearer is a wearer of a third headset, the third headset is a headset having a communication connection with the first headset, and a distance between the third headset and the first headset is less than a distance threshold; and if the third wearer does not have the permission to listen to the voice data, processing the voice data and playing the processed voice data.
In some embodiments of the present application, the voice data includes a plurality of voice commands, and the playing order of each voice command is different; the playing unit 1303 may be specifically configured to: identifying a target voice instruction in the plurality of voice instructions and acquiring a playing sequence of the target voice instruction; the target voice instruction is a voice instruction associated with the first headset; if the playing sequence of the target voice instruction is the first-order playing, playing the target voice instruction, and sending instruction playing prompt information to the next earphone of the first earphone after the target voice instruction is played; the next earphone of the first earphone is an earphone which needs to play the next voice instruction corresponding to the target voice instruction; if the playing sequence of the target instruction is non-head playing, playing the target voice instruction after receiving instruction playing prompt information sent by a previous earphone of the first earphone, and if the playing sequence of the target instruction is non-end playing, sending instruction playing prompt information to a next earphone of the first earphone after finishing playing the target voice instruction, wherein the previous earphone of the first earphone is an earphone needing to play a previous voice instruction corresponding to the target voice instruction.
It should be noted that, for convenience and simplicity of description, the specific working process of the voice playing apparatus 1300 may refer to the corresponding process of the method described in fig. 1 to fig. 9, and is not described herein again.
Fig. 14 is a schematic structural diagram of a voice sending apparatus 1400 according to an embodiment of the present application, in which the voice sending apparatus 1400 is configured on a second earphone. The voice sending apparatus 1400 may include: an acquisition unit 1401 and a sending unit 1402.
An acquisition unit 1401 for acquiring voice data;
a sending unit 1402, configured to establish a communication connection with a first headset, and send voice data to the first headset, where the voice data is played by the first headset according to the voice playing method described in the foregoing fig. 1 to fig. 9.
In some embodiments of the present application, the voice data includes at least one of first voice data transmitted by the first terminal and second voice data collected by a microphone on the second headset; wherein the first terminal is a terminal establishing a communication connection with the second headset.
In some embodiments of the present application, the voice data includes first voice data and second voice data, and the voice sending apparatus 1400 further includes a sound effect adjusting unit, configured to: detecting a first sound effect of the first voice data and a second sound effect of the second voice data; and based on the first sound effect and the second sound effect, carrying out sound effect adjustment operation on at least one voice data in the first voice data and the second voice data.
In some embodiments of the present application, the voice data includes the first voice data, and the sending unit 1402 is further configured to: acquiring source information related to the first voice data; determining whether the first voice data is internal information which cannot be sent outwards or not according to the source information; and if the first voice data is not the internal information, sending the first voice data to the first earphone.
It should be noted that, for convenience and simplicity of description, the specific working process of the voice sending apparatus 1400 may refer to the corresponding process of the method described in fig. 10 to fig. 12, and is not described herein again.
Fig. 15 is a schematic view of an earphone according to an embodiment of the present application. The earphone 15 may include: a processor 150, a memory 151 and a computer program 152, such as a voice playback program, stored in said memory 151 and operable on said processor 150. The processor 150 executes the computer program 152 to implement the steps in the above-mentioned voice playing method embodiments, such as the steps S101 to S103 shown in fig. 1. Alternatively, the processor 150 implements the steps in the above-mentioned embodiments of the voice transmission method, such as steps S1001 to S1002 shown in fig. 10, when executing the computer program 152.
The processor 150, when executing the computer program 152, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the acquisition unit 1301, the detection unit 1302, and the playing unit 1303 shown in fig. 13. Alternatively, the processor 150, when executing the computer program 152, implements the functions of each module/unit in each device embodiment described above, for example, the functions of the acquisition unit 1401 and the transmission unit 1402 shown in fig. 14.
The computer program may be partitioned into one or more modules/units that are stored in the memory 151 and executed by the processor 150 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program in the headset.
For example, the computer program may be divided into an acquisition unit, a detection unit, and a playing unit, wherein the acquisition unit is configured to acquire voice data to be played; the detection unit is configured to acquire a first real-time position of the first earphone and detect whether the first real-time position meets a preset position condition; and the playing unit is configured to play the voice data if the first real-time position meets the position condition. For another example, the computer program may be divided into an acquisition unit and a sending unit, wherein the acquisition unit is configured to acquire voice data; and the sending unit is configured to establish a communication connection with a first earphone and send the voice data to the first earphone, where the voice data is played by the first earphone according to the voice playing method described with reference to Figs. 1 to 9.
The headset may include, but is not limited to, a processor 150, a memory 151. It will be appreciated by those skilled in the art that fig. 15 is merely an example of a headset and is not intended to be limiting and may include more or fewer components than shown, or some components in combination, or different components, for example the headset may also include input output devices, network access devices, buses, etc.
The Processor 150 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 151 may be an internal storage unit of the headset, such as a hard disk or a memory of the headset. The memory 151 may also be an external storage device of the earphone, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the earphone. Further, the memory 151 may also include both an internal storage unit of the headset and an external storage device. The memory 151 is used to store the computer program and other programs and data required by the headset. The memory 151 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/headset and method may be implemented in other ways. For example, the above-described device/headset embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (13)

1. A voice playing method is applied to a first earphone and comprises the following steps:
acquiring voice data to be played;
acquiring a first real-time position of the first earphone, and detecting whether the first real-time position meets a preset position condition;
if the first real-time position meets the position condition, playing the voice data;
wherein, whether the first real-time position meets the preset position condition or not is detected, and the method comprises the following steps:
identifying a first wearer of the first earphone and acquiring a first area associated with the first wearer, wherein the first area is an area allowing the first earphone to play the voice data;
detecting whether the first real-time location is within the first area;
if the first real-time position is located in the first area, confirming that the first real-time position meets the position condition;
wherein said obtaining a first region associated with the first wearer comprises:
obtaining a voice instruction associated with the first wearer;
according to the voice instruction, identifying a target position pointed by the voice instruction;
determining an audible path of the first earphone according to the first real-time position and the target position;
and determining the first area according to the audible path.
2. The voice playing method according to claim 1, wherein said playing the voice data comprises:
determining a third area in the first area according to a second area planned in advance, wherein the third area is a superposition area of the first area and the second area;
if the first real-time position is located in the third area, acquiring area information of the third area;
and processing the voice data according to the region information, and playing the processed voice data.
3. The voice playback method according to claim 2, wherein the zone information contains a privacy level of the third zone;
the processing the voice data according to the region information includes:
and if the privacy grade is lower than a preset privacy grade, identifying the keywords in the voice data, and carrying out silencing treatment on the keywords in the voice data.
4. The speech playback method according to claim 3, wherein if the privacy level is equal to or higher than the preset privacy level, the number of persons in the third area is recognized, and if the number of persons is larger than the preset number of persons, the keyword in the speech data is recognized and the keyword in the speech data is silenced.
5. The voice playing method according to claim 1, wherein the voice data comprises voice data respectively transmitted by a plurality of second earphones;
said playing the voice data comprises:
acquiring the priority of the second wearer associated with each piece of voice data, wherein the second wearer is the wearer of a second earphone;
sorting the voice data according to the priorities to obtain a voice playing order;
and playing each piece of voice data according to the voice playing order.
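The priority-based ordering of claim 5 amounts to a stable sort of the incoming voice data by the sending wearer's priority. The convention that a lower number means higher priority is an assumption of this sketch:

```python
def play_order(voice_data, wearer_priority):
    """Order incoming voice data by the priority of the second wearer
    who sent it (lower number = higher priority here, an assumption).
    sorted() is stable, so data from equal-priority wearers keeps its
    arrival order."""
    return sorted(voice_data, key=lambda d: wearer_priority[d["wearer"]])
```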
6. The voice playing method according to claim 1, wherein said playing the voice data further comprises:
detecting whether a third wearer has permission to listen to the voice data, wherein the third wearer is the wearer of a third earphone, the third earphone is an earphone having a communication connection with the first earphone, and the distance between the third earphone and the first earphone is less than a distance threshold;
and if the third wearer does not have permission to listen to the voice data, processing the voice data and playing the processed voice data.
7. The voice playing method according to claim 1, wherein the voice data comprises a plurality of voice instructions, each voice instruction having a different playing order;
said playing the voice data further comprises:
identifying a target voice instruction among the plurality of voice instructions and acquiring the playing order of the target voice instruction, wherein the target voice instruction is the voice instruction associated with the first earphone;
if the target voice instruction is to be played first, playing the target voice instruction, and after the target voice instruction has been played, sending instruction-playing prompt information to the next earphone after the first earphone, wherein the next earphone after the first earphone is the earphone that needs to play the voice instruction following the target voice instruction;
if the target voice instruction is not to be played first, playing the target voice instruction after receiving instruction-playing prompt information sent by the earphone preceding the first earphone; and if the target voice instruction is not to be played last, sending instruction-playing prompt information to the next earphone after the first earphone once the target voice instruction has been played, wherein the earphone preceding the first earphone is the earphone that needs to play the voice instruction preceding the target voice instruction.
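The relayed playback of claim 7 can be sketched as a chain of earphone objects, each of which "plays" its own instruction and then prompts the next. The in-process method call and the shared `log` list stand in for the real inter-earphone prompt message and loudspeaker output:

```python
class Earphone:
    """Claim 7 sketch: each earphone plays its instruction in turn and
    sends a prompt to the next earphone in the chain (if any)."""

    def __init__(self, name, instruction, log):
        self.name, self.instruction, self.log = name, instruction, log
        self.next = None  # the next earphone after this one, or None if last

    def on_prompt(self):
        self.log.append((self.name, self.instruction))  # "play" the instruction
        if self.next is not None:   # not last: prompt the next earphone
            self.next.on_prompt()

log = []
e1, e2, e3 = (Earphone(n, i, log) for n, i in
              [("first", "step 1"), ("second", "step 2"), ("third", "step 3")])
e1.next, e2.next = e2, e3
e1.on_prompt()  # the earphone that plays first starts the chain
```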
8. A voice transmission method applied to a second earphone, comprising:
acquiring voice data;
and establishing a communication connection with a first earphone and sending the voice data to the first earphone, wherein the voice data is played by the first earphone according to the voice playing method of claim 1.
9. The voice transmission method according to claim 8, wherein the voice data comprises at least one of first voice data transmitted by a first terminal and second voice data collected by a microphone on the second earphone, wherein the first terminal is a terminal that establishes a communication connection with the second earphone.
10. The voice transmission method according to claim 9, wherein the voice data comprises the first voice data and the second voice data, and before said sending the voice data to the first earphone, the method comprises:
detecting a first sound effect of the first voice data and a second sound effect of the second voice data;
and performing a sound-effect adjustment operation on at least one of the first voice data and the second voice data based on the first sound effect and the second sound effect.
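One plausible reading of the sound-effect adjustment in claim 10 is loudness balancing: measure each stream's level and scale the quieter one so both play at comparable loudness. Treating "sound effect" as peak sample amplitude is an assumption of this sketch, not something the claim specifies:

```python
def balance_loudness(first, second):
    """Scale the quieter of two sample streams so that both share the
    same peak amplitude; returns the adjusted (first, second) pair."""
    p1, p2 = max(map(abs, first)), max(map(abs, second))
    if p1 == 0 or p2 == 0:
        return list(first), list(second)  # a silent stream is left alone
    if p1 < p2:
        g = p2 / p1
        return [s * g for s in first], list(second)
    g = p1 / p2
    return list(first), [s * g for s in second]
```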
11. The voice transmission method according to claim 9, wherein the voice data comprises the first voice data, and said sending the voice data to the first earphone comprises:
acquiring source information related to the first voice data;
determining, according to the source information, whether the first voice data is internal information that cannot be sent externally;
and if the first voice data is not internal information, sending the first voice data to the first earphone.
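The source-based gate of claim 11 might look as follows; the source tags and the `sent` list (standing in for the wireless link to the first earphone) are illustrative assumptions:

```python
INTERNAL_SOURCES = {"intranet", "internal-app"}  # illustrative source tags

def forward_if_external(voice, sent):
    """Consult the voice data's source information and only forward
    non-internal voice data to the first earphone (claim 11 sketch)."""
    if voice["source"] in INTERNAL_SOURCES:
        return False       # internal information: do not send externally
    sent.append(voice)     # stands in for the actual send to the earphone
    return True
```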
12. An earphone comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7, or implements the steps of the method according to any one of claims 8 to 11.
13. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7, or implements the steps of the method according to any one of claims 8 to 11.
CN202110001135.XA 2021-01-04 2021-01-04 Voice playing method, earphone and storage medium Active CN112351364B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202110307073.5A CN112887871B (en) 2021-01-04 2021-01-04 Headset voice playing method based on permission, headset and storage medium
CN202110307077.3A CN112887872B (en) 2021-01-04 2021-01-04 Earphone voice instruction playing method, earphone and storage medium
CN202110001135.XA CN112351364B (en) 2021-01-04 2021-01-04 Voice playing method, earphone and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110001135.XA CN112351364B (en) 2021-01-04 2021-01-04 Voice playing method, earphone and storage medium

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202110307073.5A Division CN112887871B (en) 2021-01-04 2021-01-04 Headset voice playing method based on permission, headset and storage medium
CN202110307077.3A Division CN112887872B (en) 2021-01-04 2021-01-04 Earphone voice instruction playing method, earphone and storage medium

Publications (2)

Publication Number Publication Date
CN112351364A CN112351364A (en) 2021-02-09
CN112351364B true CN112351364B (en) 2021-04-16

Family

ID=74427729

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110307073.5A Active CN112887871B (en) 2021-01-04 2021-01-04 Headset voice playing method based on permission, headset and storage medium
CN202110001135.XA Active CN112351364B (en) 2021-01-04 2021-01-04 Voice playing method, earphone and storage medium
CN202110307077.3A Active CN112887872B (en) 2021-01-04 2021-01-04 Earphone voice instruction playing method, earphone and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110307073.5A Active CN112887871B (en) 2021-01-04 2021-01-04 Headset voice playing method based on permission, headset and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110307077.3A Active CN112887872B (en) 2021-01-04 2021-01-04 Earphone voice instruction playing method, earphone and storage medium

Country Status (1)

Country Link
CN (3) CN112887871B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114999489A (en) * 2022-06-28 2022-09-02 歌尔科技有限公司 Wearable device control method and apparatus, terminal device and storage medium
CN116229987B (en) * 2022-12-13 2023-11-21 广东保伦电子股份有限公司 Campus voice recognition method, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985405A (en) * 2014-04-18 2014-08-13 青岛尚慧信息技术有限公司 Audio player
US10728655B1 (en) * 2018-12-17 2020-07-28 Facebook Technologies, Llc Customized sound field for increased privacy

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130018495A1 (en) * 2011-07-13 2013-01-17 Nokia Corporation Method and apparatus for providing content to an earpiece in accordance with a privacy filter and content selection rule
CN103391118A (en) * 2013-07-23 2013-11-13 广东欧珀移动通信有限公司 Bluetooth headset and audio sharing method by virtue of Bluetooth headsets
US10721594B2 (en) * 2014-06-26 2020-07-21 Microsoft Technology Licensing, Llc Location-based audio messaging
CN105323670A (en) * 2014-07-11 2016-02-10 西安Tcl软件开发有限公司 Terminal and directional audio signal sending method
WO2016115716A1 (en) * 2015-01-23 2016-07-28 华为技术有限公司 Voice playing method and voice playing device
CN106998397B (en) * 2016-01-25 2020-02-07 平安科技(深圳)有限公司 Voice broadcasting method and system for multiple service types
CN106791024A (en) * 2016-11-30 2017-05-31 广东欧珀移动通信有限公司 Voice messaging player method, device and terminal
CN106851450A (en) * 2016-12-26 2017-06-13 歌尔科技有限公司 A kind of wireless headset pair and electronic equipment
CN106714105A (en) * 2016-12-27 2017-05-24 广东小天才科技有限公司 Wearable equipment playing mode control method and wearable equipment
CN106686186B (en) * 2016-12-27 2019-07-02 广东小天才科技有限公司 A kind of control method for playing back and wearable device of wearable device
CN110603588A (en) * 2017-02-14 2019-12-20 爱浮诺亚股份有限公司 Method for detecting voice activity of user in communication assembly and communication assembly thereof
CN106973160A (en) * 2017-03-27 2017-07-21 广东小天才科技有限公司 A kind of method for secret protection, device and equipment
CN107182011B (en) * 2017-07-21 2024-04-05 深圳市泰衡诺科技有限公司上海分公司 Audio playing method and system, mobile terminal and WiFi earphone
CN107609371B (en) * 2017-09-04 2021-04-13 联想(北京)有限公司 Message prompting method and audio playing device
CN109639908A (en) * 2019-01-28 2019-04-16 上海与德通讯技术有限公司 A kind of bluetooth headset, anti-eavesdrop method, apparatus, equipment and medium
CN110162252A (en) * 2019-05-24 2019-08-23 北京百度网讯科技有限公司 Simultaneous interpretation system, method, mobile terminal and server
CN111586515A (en) * 2020-04-30 2020-08-25 歌尔科技有限公司 Sound monitoring method, equipment and storage medium based on wireless earphone
CN111709008A (en) * 2020-06-10 2020-09-25 上海闻泰信息技术有限公司 Earphone control method and device, electronic equipment and computer readable storage medium
CN111883128A (en) * 2020-07-31 2020-11-03 中国工商银行股份有限公司 Voice processing method and system, and voice processing device
CN112887871B (en) * 2021-01-04 2023-06-23 深圳千岸科技股份有限公司 Headset voice playing method based on permission, headset and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887871B (en) * 2021-01-04 2023-06-23 深圳千岸科技股份有限公司 Headset voice playing method based on permission, headset and storage medium
CN112887872B (en) * 2021-01-04 2023-06-23 深圳千岸科技股份有限公司 Earphone voice instruction playing method, earphone and storage medium

Also Published As

Publication number Publication date
CN112887871B (en) 2023-06-23
CN112887872A (en) 2021-06-01
CN112887872B (en) 2023-06-23
CN112351364A (en) 2021-02-09
CN112887871A (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN112351364B (en) Voice playing method, earphone and storage medium
EP3202160B1 (en) Method of providing hearing assistance between users in an ad hoc network and corresponding system
US9271077B2 (en) Method and system for directional enhancement of sound using small microphone arrays
US9307331B2 (en) Hearing device with selectable perceived spatial positioning of sound sources
US9424843B2 (en) Methods and apparatus for signal sharing to improve speech understanding
CN106797508B (en) For improving the method and earphone of sound quality
JP6193844B2 (en) Hearing device with selectable perceptual spatial sound source positioning
CN106210365B (en) Videoconference method for regulation of sound volume and system
US11664042B2 (en) Voice signal enhancement for head-worn audio devices
US20210329370A1 (en) Method for providing service using earset
CN110650403A (en) Earphone device with local call environment mode
WO2014186580A1 (en) Hearing assistive device and system
CN113038337A (en) Audio playing method, wireless earphone and computer readable storage medium
EP2887695B1 (en) A hearing device with selectable perceived spatial positioning of sound sources
US10200795B2 (en) Method of operating a hearing system for conducting telephone calls and a corresponding hearing system
CN104734829A (en) An audio communication system with merging and demerging communications zones
KR20210055715A (en) Methods and systems for enhancing environmental audio signals of hearing devices and such hearing devices
US11825283B2 (en) Audio feedback for user call status awareness
EP4184507A1 (en) Headset apparatus, teleconference system, user device and teleconferencing method
CN114979880A (en) Automatic acoustic switching
Paccioretti Beyond hearing aids; Technologies to improve hearing accessibility for older adults with hearing loss.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant