WO2020132839A1 - Audio data transmission method and device for mono/binaural switching of a TWS headset - Google Patents

Audio data transmission method and device for mono/binaural switching of a TWS headset

Info

Publication number
WO2020132839A1
WO2020132839A1 (PCT/CN2018/123243)
Authority
WO
WIPO (PCT)
Prior art keywords
cis
earplug
electronic device
audio data
cig
Prior art date
Application number
PCT/CN2018/123243
Other languages
English (en)
French (fr)
Inventor
朱宇洪
王良
郑勇
张景云
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP18944882.2A (published as EP3883259A4)
Priority to PCT/CN2018/123243 (published as WO2020132839A1)
Priority to CN202210711245.XA (published as CN115190389A)
Priority to US17/417,700 (published as US11778363B2)
Priority to CN202210712039.0A (published as CN115175043A)
Priority to CN201880098184.6A (published as CN112789866B)
Publication of WO2020132839A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/12Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems
    • H04S1/002Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S1/005For headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/38Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07Applications of wireless loudspeakers or wireless microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/11Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/033Headphones for stereophonic communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • the embodiments of the present application relate to the technical field of short-distance communication, and in particular, to an audio data transmission method and an electronic device.
  • the source device can transmit audio data (audio stream) to one or more destination devices via an isochronous (ISO) channel of Bluetooth low energy (BLE).
  • a mobile phone can transmit audio data to the left and right earbuds of a true wireless stereo (TWS) headset through the ISO channel of BLE.
  • A TWS earphone includes two earphone bodies, namely a left earplug and a right earplug, with no wired connection needed between them.
  • the left and right earplugs of the TWS earphone can be used as an audio input/output device of a mobile phone, and they can be used together (called binaural state) to realize functions such as music playback or voice communication.
  • In the binaural state, playback-level synchronization of audio data needs to be achieved; that is, the left and right earplugs need to play the received audio data at the same time.
  • either the left earplug or the right earplug of the TWS earphone can be used as the audio input/output device of a mobile phone, which can be used alone (referred to as a mono-ear state) to achieve functions such as music playback or voice communication.
  • In the mono-ear state, no playback-level synchronization of audio data is required.
  • In the mono-ear state and the binaural state, the mobile phone configures the ISO channel differently.
  • When the TWS earphone performs a mono/binaural switch of the earplugs (such as switching from the mono-ear state to the binaural state, or from the binaural state to the mono-ear state), the mobile phone therefore needs to reconfigure the ISO channel.
  • Embodiments of the present application provide an audio data transmission method and an electronic device, which can ensure normal transmission of audio data when switching between the mono-ear and binaural states.
  • an embodiment of the present application provides an audio data transmission method.
  • The electronic device can configure for the TWS earphone a first connected isochronous group (CIG) that includes two connected isochronous streams (CIS), such as a first CIS and a second CIS.
  • In the binaural state, the electronic device can activate the two CISs, and then transmit audio data with the left and right earplugs of the TWS headset through them.
  • In the mono-ear state, the electronic device can activate only one CIS and perform audio data transmission with the corresponding earplug through that CIS. If mono/binaural switching occurs during music playback or voice communication, the electronic device does not need to reconfigure the CIS; it only needs to activate or deactivate the corresponding CIS. In this way, the transmission of audio data is not interrupted, which ensures normal transmission and improves the user experience.
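As a sketch of the activate/deactivate logic described above, the following Python model shows how a CIG configured once with two CISs can follow mono/binaural switching without any reconfiguration. The names (`Cig`, `activate`, etc.) are illustrative, not real Bluetooth HCI calls:

```python
# Illustrative model of the scheme above: a CIG holds two pre-configured
# CIS, and mono/binaural switching only toggles their active state.
class Cig:
    def __init__(self):
        # Both CIS are configured up front, regardless of the ear state.
        self.active = {"cis1": False, "cis2": False}
        self.reconfigurations = 0  # a real reconfiguration would interrupt audio

    def activate(self, cis):
        self.active[cis] = True

    def deactivate(self, cis):
        self.active[cis] = False

    def streams_in_use(self):
        return [c for c, on in self.active.items() if on]

cig = Cig()
cig.activate("cis1")                         # mono-ear state: first earplug only
assert cig.streams_in_use() == ["cis1"]
cig.activate("cis2")                         # switch to binaural: just activate
assert cig.streams_in_use() == ["cis1", "cis2"]
cig.deactivate("cis2")                       # back to mono-ear: just deactivate
assert cig.streams_in_use() == ["cis1"]
assert cig.reconfigurations == 0             # the CIG was never reconfigured
```

The point of the model is the last assertion: every state transition is a toggle, so no reconfiguration (and hence no audio interruption) is ever required.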
  • In the binaural state, the electronic device may activate the first CIS and the second CIS, transmit audio data through the first CIS and the first earplug, and transmit audio data through the second CIS and the second earplug.
  • When switching to a mono-ear state, the electronic device can deactivate one CIS and continue to transmit audio data with the corresponding earplug through the other CIS.
  • In the mono-ear state, the electronic device activates only one CIS (such as the first CIS), and transmits audio data through the activated CIS and the corresponding earplug (such as the first earplug).
  • In this case, the electronic device does not activate the other CIS (such as the second CIS).
  • When switching to the binaural state, the electronic device can activate the other CIS and use the two CISs and the two earplugs for audio data transmission.
  • In a possible design, the first CIS and the second CIS are configured in a serial scheduling (Sequential) transmission mode or an interleaved scheduling (Interleaved) transmission mode.
  • In different transmission modes, the configured CIG parameters of the first CIG are different.
  • In a possible design, the first CIS and the second CIS are configured in the interleaved scheduling transmission mode.
  • In this mode, the anchor point of the first CIS is the CIG anchor point of the first CIG; the anchor point of the second CIS is the end point of the first sub-event in the CIS event of the first CIS; and the starting point of the second sub-event of the first CIS is the end point of the first sub-event of the second CIS.
  • The first CIS and the second CIS each include multiple CIS events; the first CIG includes multiple CIG events; each CIG event includes one CIS event of the first CIS and one CIS event of the second CIS. Each CIS event of the first CIS includes N1 sub-events, where N1 is greater than or equal to 2; each CIS event of the second CIS includes N2 sub-events, where N2 is greater than or equal to 2.
  • the electronic device transmits audio data through the first CIS and the first earplug from the anchor point of the first CIS, and the electronic device transmits audio data through the second CIS and the second earplug from the anchor point of the second CIS.
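The interleaved timing above can be sketched numerically. The uniform sub-event duration `d` and the unit timeline below are assumptions for illustration only; real CIG parameters carry their own durations:

```python
# Sketch of the interleaved scheduling timeline described above, assuming
# (for illustration) that every sub-event has the same duration d.
def interleaved_sub_event_starts(cig_anchor, d, n_sub):
    """Start times of the sub-events of two interleaved CIS."""
    cis1 = [cig_anchor + 2 * i * d for i in range(n_sub)]        # even slots
    cis2 = [cig_anchor + (2 * i + 1) * d for i in range(n_sub)]  # odd slots
    return cis1, cis2

cis1, cis2 = interleaved_sub_event_starts(cig_anchor=0, d=1, n_sub=2)
# The anchor point of the first CIS is the CIG anchor point:
assert cis1[0] == 0
# The anchor point of the second CIS is the end of CIS(1)'s first sub-event:
assert cis2[0] == cis1[0] + 1
# The second sub-event of CIS(1) starts where CIS(2)'s first sub-event ends:
assert cis1[1] == cis2[0] + 1
```

With these numbers the air time alternates CIS(1), CIS(2), CIS(1), CIS(2), which is exactly the interleaving the anchors above describe.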
  • Alternatively, the first CIS and the second CIS may be configured in the serially scheduled transmission mode.
  • For the serially scheduled transmission mode, reference may be made to the description in other parts of the embodiments of the present application, which will not be repeated here.
  • The advantage of the interleaved scheduling transmission mode is that the sub-events of the first CIS and the sub-events of the second CIS are interleaved in time; that is, the audio data of the first CIS and the audio data of the second CIS are transmitted alternately. This makes the degree of interference experienced by different CISs more equal and improves the anti-interference performance of audio data transmission.
  • the first CIS and the second CIS are configured as a serially scheduled transmission mode.
  • In this mode, the anchor point of the first CIS is the CIG anchor point of the first CIG, and the anchor point of the second CIS is the end point of the CIS event of the first CIS.
  • For the case where the first CIS and the second CIS are configured in the interleaved scheduling transmission mode, reference may be made to the description in other parts of the embodiments of the present application, which will not be repeated here.
  • The advantage of the serial scheduling transmission mode is that the electronic device can transmit audio data with one earbud (such as the first earbud) continuously in time, for example across all sub-events of a CIS event of the first CIS. In this way, interference between CISs can be reduced, and the anti-interference performance of audio data transmission can be improved.
  • In addition, the serially scheduled transmission mode can reserve a long continuous period for other transmissions, such as wireless fidelity (Wi-Fi). In this way, mutual interference caused by Wi-Fi and Bluetooth frequently alternating over shared transmission resources can be reduced.
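By contrast, the serially scheduled timeline keeps each CIS's sub-events contiguous. Again, the uniform sub-event duration `d` is an illustrative assumption:

```python
# Sketch of the serially scheduled timeline: all N1 sub-events of CIS(1)
# run back to back, and CIS(2) is anchored at the end of that CIS event.
def serial_sub_event_starts(cig_anchor, d, n1, n2):
    cis1 = [cig_anchor + i * d for i in range(n1)]
    cis2_anchor = cig_anchor + n1 * d    # end point of CIS(1)'s CIS event
    cis2 = [cis2_anchor + i * d for i in range(n2)]
    return cis1, cis2

cis1, cis2 = serial_sub_event_starts(cig_anchor=0, d=1, n1=2, n2=2)
assert cis1 == [0, 1]        # CIS(1) transmits in one continuous run
assert cis2 == [2, 3]        # CIS(2) starts only after CIS(1) finishes
```

Because each stream occupies one uninterrupted block, the remainder of the interval after both blocks is a single continuous gap, which is what allows a coexisting radio such as Wi-Fi to use it without frequent hand-offs.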
  • In another possible design, the first CIS and the second CIS are configured in a jointly scheduled transmission mode.
  • The joint scheduling transmission mode avoids a drawback of the serial and interleaved scheduling modes, in which the electronic device transmits the same audio data to the left and right earplugs of the TWS headset at different times; it thereby reduces the waste of transmission resources and improves their effective utilization.
  • the anchor point of the first CIS and the anchor point of the second CIS are both CIG anchor points of the first CIG.
  • the first CIG includes multiple CIG events; the CIG anchor point of the first CIG is the starting time point of the CIG event.
  • an embodiment of the present application provides an audio data transmission method, which can be used for audio data transmission between an electronic device and a TWS earphone.
  • the TWS earphone includes a first earplug and a second earplug.
  • the electronic device may transmit audio data through the first CIS of the first CIG and the first earplug, and transmit audio data through the second CIS of the first CIG and the second earplug.
  • the TWS earphone is in a binaural state, that is, a state where the first earplug and the second earplug are used together as an audio input/output device of the electronic device.
  • When the TWS earphone is switched to the first monaural state, the electronic device can deactivate the second CIS, stop transmitting audio data through the second CIS and the second earplug, and continue to transmit audio data through the first CIS and the first earplug.
  • the first monaural state is a state in which the first earplug is used alone as the audio input/output device of the electronic device.
  • When the TWS earphone is in the binaural state, it may also be switched from the binaural state to the second monaural state.
  • the second monaural state is a state where the second earplug is used alone as the audio input/output device of the electronic device.
  • the electronic device may deactivate the first CIS, stop transmitting audio data through the first CIS and the first earplug, and continue to transmit audio data through the second CIS and the second earplug.
  • When switching to a monaural state, the electronic device may simply deactivate the CIS corresponding to the unused earplug (such as the second CIS) instead of reconfiguring the CIS. In this way, the transmission of audio data is not interrupted, which ensures normal transmission and improves the user experience.
  • After the electronic device deactivates the second CIS, stops transmitting audio data through the second CIS and the second earplug, and continues to transmit audio data through the first CIS and the first earplug (that is, after the TWS earphone is switched from the binaural state to the first monaural state), the TWS earphone may also be switched from the first monaural state back to the binaural state.
  • the method in the embodiment of the present application may further include: the electronic device determines that the TWS earphone is switched from the first monaural state to the binaural state; in response to determining that the TWS earphone is switched from the first monaural state to the binaural state, the electronic device continues Audio data is transmitted through the first CIS and the first earplug, and the second CIS is activated, and audio data is transmitted through the second CIS and the second earplug.
  • Since the electronic device has configured two CISs for the TWS earphone, when the TWS earphone is switched from a monaural state (such as the first monaural state) to the binaural state, the electronic device only needs to activate the CIS corresponding to the previously unused earplug (such as the second CIS). In this way, the transmission of audio data is not interrupted, which ensures normal transmission and improves the user experience.
  • Before the electronic device transmits audio data through the first CIS and the first earplug, it can determine that the TWS earphone is in the binaural state. In response, the electronic device configures for the TWS earphone the first CIG including the first CIS and the second CIS, configures the first CIS for the first earplug and the second CIS for the second earplug, and activates the first CIS and the second CIS.
  • With this configuration, even if the TWS earphone is switched from the binaural state to a monaural state, the electronic device only needs to deactivate the corresponding CIS. In this way, the transmission of audio data is not interrupted, which ensures normal transmission and improves the user experience.
  • In a possible design, the first CIS and the second CIS are configured in the interleaved scheduling transmission mode.
  • In this mode, the anchor point of the first CIS is the CIG anchor point of the first CIG; the anchor point of the second CIS is the end point of the first sub-event in the CIS event of the first CIS; and the starting point of the second sub-event of the first CIS is the end point of the first sub-event of the second CIS.
  • For the advantages of the interleaved scheduling transmission mode, reference may be made to the description in the possible designs of the first aspect, which is not repeated here.
  • In another possible design, the first CIS and the second CIS are configured in the jointly scheduled transmission mode.
  • the anchor point of the first CIS and the anchor point of the second CIS are both CIG anchor points of the first CIG.
  • For the advantages of the jointly scheduled transmission mode in the binaural state, reference may be made to the description in the possible designs of the first aspect, which is not repeated here.
  • the electronic device may receive the user's suspend operation. This suspend operation is used to trigger the TWS headset to pause playing audio data.
  • In response to the suspend operation, the electronic device can re-determine the current state of the TWS earphone (such as the first monaural state), and then reconfigure the first CIG for the TWS headset.
  • the reconfigured first CIG includes the reconfigured first CIS and the reconfigured second CIS.
  • the reconfigured first CIS and the reconfigured second CIS are applicable to the state after the TWS earphone is switched, such as the first monaural state.
  • the reconfigured first CIS and the reconfigured second CIS may be configured as a serially scheduled transmission mode.
  • For a detailed introduction to the serially scheduled transmission mode, reference may be made to the description in other parts of the embodiments of the present application, which will not be repeated here.
  • In the first monaural state, the electronic device may configure the reconfigured first CIS for the first earplug, activate it, and transmit audio data with the first earplug through the reconfigured first CIS starting from its anchor point.
  • the reconfigured second CIS is not activated in the first monaural state.
  • While the suspend operation is in effect, the transmission of audio data is suspended (i.e., stopped).
  • The electronic device reconfigures the CIS during this suspension, so that after the service restarts, it can transmit audio data through the reconfigured CIS. In this way, reconfiguring the CIS causes no service interruption.
  • an embodiment of the present application provides an audio data transmission method, which can be used for audio data transmission between an electronic device and a TWS earphone.
  • the TWS earphone includes a first earplug and a second earplug.
  • When the TWS earphone is in the first monaural state, the first CIG including the first CIS and the second CIS may be configured for the TWS earphone.
  • the first monaural state is a state in which the first earplug is used alone as an audio input/output device of an electronic device.
  • The electronic device may configure the first CIS for the first earplug, activate the first CIS, and transmit audio data through the first CIS and the first earplug; the second CIS remains inactive in the first monaural state.
  • Even when the electronic device determines that the TWS earphone is in the mono-ear state, two CISs (the first CIS and the second CIS) are still configured; in the mono-ear state, only one CIS is activated while the other is not.
  • the electronic device may configure the TWS earphone with the first CIG including two CISs (such as the first CIS and the second CIS). In this way, if the TWS headset is switched from the mono-ear state to the binaural state during music playback or voice communication, the electronic device does not need to reconfigure the CIS, as long as the corresponding CIS (such as the second CIS) is activated. In this way, the transmission of audio data will not be interrupted, which can ensure the normal transmission of audio data and improve the user experience.
  • the TWS earphone may also be switched from the first mono-ear state to the binaural state again.
  • The method of the embodiment of the present application may further include: the electronic device determines that the TWS earphone is switched from the first monaural state to the binaural state; in response, the electronic device activates the second CIS, transmits audio data through the second CIS and the second earplug, and continues to transmit audio data through the first CIS and the first earplug.
  • Since the electronic device has configured two CISs for the TWS earphone, when the TWS earphone is switched from a monaural state (such as the first monaural state) to the binaural state, the electronic device only needs to activate the CIS corresponding to the previously unused earplug (such as the second CIS). In this way, the transmission of audio data is not interrupted, which ensures normal transmission and improves the user experience.
  • When in the binaural state, the TWS headset may also be switched from the binaural state to the second monaural state.
  • If the electronic device determines that the TWS earphone is switched from the binaural state to the second monaural state, it can deactivate the first CIS, stop transmitting audio data through the first CIS and the first earplug, and continue to transmit audio data through the second CIS and the second earplug.
  • In a possible design, the first CIS and the second CIS are configured in the serially scheduled transmission mode.
  • In this mode, the anchor point of the first CIS is the CIG anchor point of the first CIG, and the anchor point of the second CIS is the end point of the CIS event of the first CIS.
  • The electronic device may receive the user's suspend operation, which is used to trigger the TWS headset to pause playing audio data.
  • In some cases, the CIS transmission mode that the electronic device configured for the earplugs is not suitable for the binaural state.
  • In response to the suspend operation, the electronic device can re-determine the current state of the TWS headset (such as the binaural state), and then reconfigure the first CIG for the TWS headset.
  • the reconfigured first CIG includes the reconfigured first CIS and the reconfigured second CIS.
  • the reconfigured first CIS and the reconfigured second CIS are applicable to the state after the TWS headset is switched, such as the binaural state.
  • the reconfigured first CIS and the reconfigured second CIS may be configured as a transmission mode of interleaved scheduling or joint scheduling.
  • For the interleaved scheduling and joint scheduling transmission modes, reference may be made to the descriptions in other parts of the embodiments of the present application, which will not be repeated here.
  • an embodiment of the present application provides an electronic device.
  • the electronic device includes: one or more processors, a memory, and a wireless communication module.
  • the memory and the wireless communication module are coupled to one or more processors, and the memory is used to store computer program code, and the computer program code includes computer instructions.
  • When the computer instructions are executed by the one or more processors, the electronic device performs the audio data transmission method described in any one of the first to third aspects and their possible implementations.
  • a Bluetooth communication system may include: a TWS headset and the electronic device described in the fourth aspect.
  • In a sixth aspect, a computer storage medium is provided, including computer instructions which, when run on an electronic device, cause the electronic device to perform the audio data transmission method described in any one of the first to third aspects and their possible implementations.
  • In a seventh aspect, the present application provides a computer program product that, when run on a computer, causes the computer to perform the audio data transmission method described in any one of the first to third aspects and their possible implementations.
  • The electronic device described in the fourth aspect, the Bluetooth communication system described in the fifth aspect, the computer storage medium described in the sixth aspect, and the computer program product described in the seventh aspect are all used to execute the corresponding methods provided above; for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which will not be repeated here.
  • FIG. 1 is a schematic diagram of the composition of a communication system for audio data transmission provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of an audio data transmission principle provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of another composition of a communication system for audio data transmission provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an example of a product form of a TWS earphone provided by an embodiment of the present application
  • FIG. 5 is a schematic diagram of a hardware structure of an earplug of a TWS earphone provided by an embodiment of the present application;
  • 6A is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • 6B is a schematic diagram illustrating the principle of an ISO channel for transmitting audio data according to an embodiment of this application;
  • FIG. 8 is a schematic flowchart of configuring a CIG and creating a CIS provided by an embodiment of the present application
  • FIG. 9 is a flowchart of an audio data transmission method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a principle of an interleaved scheduling transmission method provided by an embodiment of the present application.
  • FIG. 11 is a flowchart of another audio data transmission method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a principle of a serially scheduled transmission method provided by an embodiment of the present application.
  • FIG. 13 is a flowchart of another audio data transmission method provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a principle of a jointly scheduled transmission method provided by an embodiment of the present application.
  • Embodiments of the present application provide an audio data transmission method, which can be applied to electronic devices (such as mobile phones) and TWS headphones for audio data (audio stream) transmission.
  • Before the electronic device transmits audio data with the earplugs (one or two) of the TWS headset, the electronic device can first pair with the earplugs, then establish an asynchronous connection-oriented (ACL) link, and finally configure the ISO channel for the earplugs through the ACL link.
  • the electronic device may configure the CIG for the TWS earphone according to the state in which the TWS earphone is used (such as a mono-ear state or a binaural state).
  • the electronic device configuring the ISO channel to the earplug through the ACL link specifically means that the electronic device establishes the CIS in the CIG through the ACL link.
  • the CIS is used to transmit audio data between electronic devices and earplugs.
  • the CIS is carried on the ISO channel.
  • Either the left earplug or the right earplug of the TWS earphone can serve as the audio input/output device of an electronic device and be used alone (referred to as the mono-ear state) to realize functions such as music playback or voice communication.
  • the TWS earphone 101 includes an earplug 101-1 and an earplug 101-2.
  • the earplug 101-1 of the TWS earphone 101 serves as an audio input/output device of an electronic device 100 (such as a mobile phone), and can be used alone to realize functions such as music playback or voice communication.
  • the electronic device 100 configures the TWS earphone 101 with a CIG including only one CIS.
  • the electronic device 100 shown in FIG. 1 may establish an ACL link with the earplug 101-1 and establish a CIS through the ACL link.
  • the CIS is used to transmit audio data between the electronic device 100 and the earplug 101-1.
  • the CIS is carried on the ISO channel of the electronic device 100 and the earplug 101-1. It should be noted that only one CIS is included in the CIG. For example, as shown in Figure 2, only one CIS event (x) is included in the CIG event (x). Only one CIS event (x+1) is included in the CIG event (x+1).
  • a CIG includes multiple CIG events. Both CIG event (x) and CIG event (x+1) are CIG events (CIG_event) in CIG.
  • the electronic device 100 and the earplug 101-1 can transmit audio data in multiple CIG events in one CIG.
  • the electronic device 100 and the earplug 101-1 may transmit audio data in CIG events such as CIG event (x) and CIG event (x+1) in one CIG.
  • the left earplug and the right earplug of the TWS earphone are used as the audio input/output device of an electronic device and can be used together (called binaural state) to realize functions such as music playback or voice communication.
  • the earplugs 101-1 and 101-2 of the TWS earphone 101 serve as audio input/output devices of an electronic device 100 (such as a mobile phone), and can be used together to realize functions such as music playback or voice communication.
  • the electronic device 100 configures the TWS earphone 101 with a CIG including two CISs (such as CIS(1) and CIS(2)).
  • the electronic device can establish an ACL link with the left and right earplugs of the TWS earphone, respectively.
  • the electronic device may establish ACL link 1 with earplug 101-1 and ACL link 2 with earplug 101-2.
  • the electronic device 100 can establish CIS(1) through the ACL link 1, and the CIS(1) is used to transmit audio data with the earplug 101-1.
  • the CIS(1) is carried on the ISO channel 1 of the electronic device 100 and the earbud 101-1.
  • the electronic device 100 can establish the CIS(2) through the ACL link 2, and the CIS(2) is used to transmit audio data with the earplug 101-2.
  • the CIS (2) is carried on the ISO channel 2 of the electronic device 100 and the earbud 101-2.
  • the CIG includes two CISs (such as CIS(1) and CIS(2)).
  • the CIG event (x) includes the CIS (1) event (x) and the CIS (2) event (x).
  • the CIG event (x) is a CIG event in the CIG.
  • a CIG includes multiple CIG events.
  • the electronic device 100 and the earplugs 101-1 and 101-2 can transmit audio data in multiple CIG events in one CIG.
  • The CIG playback point is a time point, after the electronic device 100 has transmitted the audio data, at which the earplugs play it.
  • The earplug 101-1 corresponding to CIS(1) and the earplug 101-2 corresponding to CIS(2) can both play the received audio data at the above CIG playback point, achieving playback-level synchronization of the audio stream (that is, the two earplugs play audio data simultaneously).
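The playback-level synchronization above can be sketched as follows; the delay terms are hypothetical placeholders, since the application does not give a formula:

```python
# Sketch: both earbuds derive the playback instant from the shared CIG
# timing rather than from their individual packet-arrival times, so they
# render the same audio frame simultaneously. Delay terms are illustrative.
def playback_instant(cig_anchor, transport_delay, presentation_delay):
    return cig_anchor + transport_delay + presentation_delay

# Each earbud may receive its packets at different times...
left_rx, right_rx = 7, 9
# ...but both compute the same common CIG playback point:
left_play = playback_instant(cig_anchor=0, transport_delay=10, presentation_delay=5)
right_play = playback_instant(cig_anchor=0, transport_delay=10, presentation_delay=5)
assert left_play == right_play == 15
assert left_rx != right_rx   # reception times differ, playback does not
```

The design choice this illustrates is that synchronization is anchored to the group's shared timeline, not to per-earbud reception, which is what makes simultaneous playback possible over two independent streams.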
  • in the single-ear state, the electronic device 100 configures for the TWS earphone 101 a CIG that includes only one CIS.
  • the electronic device 100 configures the TWS earphone 101 with a CIG including two CISs.
  • when switching between states, the electronic device 100 needs to reconfigure the CIS. For example, when switching from the single-ear state to the binaural state, the electronic device 100 needs to reconfigure two CISs in one CIG for the two earplugs. Reconfiguring the CIS takes a certain amount of time, which interrupts the audio data and degrades the user experience.
  • in the embodiments of the present application, when the electronic device 100 configures the CIS for the earplugs of the TWS earphone 101, it can configure for the TWS earphone 101 a CIG that includes two CISs, regardless of whether it is in the single-ear state or the binaural state.
  • in the binaural state, the electronic device 100 can activate both CISs, and then transmit audio data to the left and right earplugs of the TWS earphone 101 through the two CISs.
  • in the single-ear state, the electronic device 100 can activate only one CIS and transmit audio data to the corresponding earplug through that CIS.
  • when switching between the single-ear state and the binaural state, the electronic device 100 does not need to reconfigure the CIS; it only needs to activate or deactivate the corresponding CIS. In this way, the transmission of audio data is not interrupted, which ensures normal audio data transmission and improves the user experience.
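The activate/deactivate approach described above can be sketched as a mapping from usage state to CIS activation flags; the CIG configuration itself never changes, so no reconfiguration delay (and no audio interruption) occurs. The state names and the dictionary model below are illustrative assumptions:

```python
def switch_state(state: str) -> dict:
    """Map a usage state to CIS activation flags for a CIG that was
    configured once with two CISs; only activation changes on a switch."""
    return {
        "CIS(1)": state in ("binaural", "first-monaural"),
        "CIS(2)": state in ("binaural", "second-monaural"),
    }

both_active = switch_state("binaural")        # both CISs active
left_only = switch_state("first-monaural")    # only CIS(1) active
```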
  • the monaural state in the embodiment of the present application may include a first monaural state and a second monaural state.
  • the first monaural state is a state where the first earplug is used alone as the audio input/output device of the electronic device.
  • the second monaural state is a state where the second earplug is used alone as the audio input/output device of the electronic device.
  • the binaural state is a state where the first earplug and the second earplug are used together as an audio input/output device of an electronic device.
  • the first earplug is earplug 101-1
  • the second earplug is earplug 101-2.
  • the electronic device 100 may be a mobile phone (such as the mobile phone 100 shown in FIG. 1 or FIG. 3), a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) or virtual reality (VR) device, a media player, a television, or another such device.
  • the structure of the electronic device 100 may be as shown in FIG. 6A, which will be described in detail in the following embodiments.
  • FIG. 4 is a schematic diagram of a product form of a TWS earphone provided by an embodiment of the present application.
  • the TWS earphone 101 may include: an earplug 101-1, an earplug 101-2, and an earplug box 101-3.
  • the earplug box can be used to store the left and right earplugs of TWS earphones.
  • FIG. 4 only provides a schematic diagram of a product form example of a TWS earphone by way of example.
  • the product forms of peripheral devices provided by the embodiments of the present application include but are not limited to the TWS earphone 101 shown in FIG. 4.
  • FIG. 5 is a schematic structural diagram of an earplug (left earplug or right earplug) of a TWS earphone according to an embodiment of the present application.
  • the earplugs (such as earplugs 101-2) of the TWS earphone 101 may include: a processor 510, a memory 520, a sensor 530, a wireless communication module 540, a receiver 550, a microphone 560, and a power supply 570.
  • the memory 520 may be used to store application program code, such as code for establishing a wireless connection with the other earplug of the TWS earphone 101 (such as the earplug 101-1), and code for pairing the earplug with the electronic device 100 (such as the mobile phone 100).
  • the processor 510 may control to execute the above application program code to implement the function of the earplug of the TWS earphone in the embodiment of the present application.
  • the memory 520 may also store a Bluetooth address for uniquely identifying the earbud, and a Bluetooth address of another earbud of the TWS headset.
  • the memory 520 may also store connection data of the electronic device that has been successfully paired with the earplug before.
  • the connection data may be the Bluetooth address of the electronic device that has successfully paired with the earbud.
  • the earplug can be automatically paired with the electronic device without having to configure a connection with it, such as performing legality verification.
  • the aforementioned Bluetooth address may be a media access control (media access control, MAC) address.
  • the sensor 530 may be a distance sensor or a proximity light sensor.
  • the earplug can determine whether the earphone is worn by the user through the sensor 530.
  • the earplug may use a proximity light sensor to detect whether there is an object near the earplug, thereby determining whether the earplug is worn by the user.
  • the earplug may open the receiver 550.
  • the earplug may also include a bone conduction sensor, forming a bone conduction earphone. Using the bone conduction sensor, the earplug can acquire the vibration signal of the vibrating bone mass of the human voice part, parse out the voice signal, and realize the voice function.
  • the earplug may further include a touch sensor for detecting the user's touch operation.
  • the earplug may further include a fingerprint sensor, which is used to detect a user's fingerprint and identify the user's identity.
  • the earplug may further include an ambient light sensor, which may adaptively adjust some parameters, such as volume, according to the perceived brightness of the ambient light.
  • the wireless communication module 540 is used to support short-distance data exchange between the earplug of the TWS earphone and various electronic devices, such as the electronic device 100 described above.
  • the wireless communication module 540 may be a Bluetooth transceiver.
  • the earplugs of the TWS headset can establish a wireless connection with the electronic device 100 through the Bluetooth transceiver to achieve short-range data exchange between the two.
  • at least one receiver 550, which may also be referred to as an "earpiece," may be used to convert audio electrical signals into sound signals and play them.
  • the receiver 550 may convert the received audio electrical signal into a sound signal and play it.
  • at least one microphone 560, which may also be referred to as a "mike" or "mic," is used to convert sound signals into audio electrical signals.
  • the microphone 560 can collect the user's voice signal while the user speaks (for example, talking on a call or sending a voice message) and convert it into an audio electrical signal.
  • the above audio electrical signal is the audio data in the embodiment of the present application.
  • the power supply 570 can be used to supply power to various components included in the earplugs of the TWS earphone 101.
  • the power source 570 may be a battery, such as a rechargeable battery.
  • the TWS earphone 101 will be equipped with an earplug box (eg, 101-3 shown in FIG. 4).
  • the earplug box can be used to store the left and right earplugs of TWS earphones.
  • the earplug box 101-3 can be used to store earplugs 101-1 and 101-2 of TWS earphones.
  • the earplug box can also charge the left and right earplugs of the TWS earphone 101.
  • the above earplug may further include: an input/output interface 580.
  • the input/output interface 580 may be used to provide a wired connection between the earplug of the TWS earphone and the earplug box (such as the earplug box 101-3 described above).
  • the input/output interface 580 may be an electrical connector.
  • when the earplugs of the TWS earphone 101 are placed in the earplug box, they can establish an electrical connection with the earplug box (for example, with the input/output interface of the earplug box) through the electrical connector. After the electrical connection is established, the earplug box can charge the power supply 570 of the earplugs, and the earplugs can also perform data communication with the earplug box.
  • the earplugs of the TWS earphone 101 can receive pairing instructions from the earplug box through this electrical connection.
  • the pairing command is used to instruct the earbuds of the TWS headset 101 to turn on the wireless communication module 540, so that the earbuds of the TWS headset 101 can use the corresponding wireless communication protocol (such as Bluetooth) to pair with the electronic device 100.
  • the earplugs of the TWS earphone 101 may not include the input/output interface 580.
  • the earplug may implement a charging or data communication function based on the wireless connection established with the earplug box through the wireless communication module 540 described above.
  • the earplug box (such as the aforementioned earplug box 101-3) may further include a processor, a memory, and other components.
  • the memory can be used to store application program code, and is controlled and executed by the processor of the earplug box to realize the function of the earplug box.
  • by executing the application program code stored in the memory, the processor of the earplug box can send a pairing command to the earplugs of the TWS headset in response to the user's operation of opening the box lid.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the earplugs of the TWS earphone 101. It may have more or fewer components than shown in FIG. 5, two or more components may be combined, or may have different component configurations.
  • the earplug may also include an indicator light (which can indicate the status of the earplug's power level, etc.), a dust filter (which can be used with the earpiece), and other components.
  • the various components shown in FIG. 5 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application specific integrated circuits.
  • the structure of the left and right earplugs of the TWS earphone 101 may be the same.
  • the left and right earplugs of the TWS earphone 101 may include the components shown in FIG. 5.
  • the structure of the left and right earplugs of the TWS earphone 101 may also be different.
  • one earplug (such as the right earplug) of the TWS earphone 101 may include the components shown in FIG. 5, and the other earplug (such as the left earplug) may include other components other than the microphone 560 in FIG. 5.
  • FIG. 6A shows a schematic structural diagram of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetch and execution.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or recycled.
  • the processor 110 may include one or more interfaces.
  • interfaces can include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 100.
  • the electronic device 100 may also use different interface connection methods in the foregoing embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive the electromagnetic wave from the antenna 1 and filter, amplify, etc. the received electromagnetic wave, and transmit it to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor and convert it to electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110, and may be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives the electromagnetic wave via the antenna 2, frequency-modulates and filters the electromagnetic wave signal, and sends the processed signal to the processor 110.
  • the wireless communication module 160 can also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic waves through the antenna 2 to radiate it out.
  • the antenna 1 of the electronic device 100 and the mobile communication module 150 are coupled, and the antenna 2 and the wireless communication module 160 are coupled so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite-based augmentation system (SBAS).
  • the electronic device 100 may utilize the wireless communication module 160 to establish a wireless connection with a peripheral device through a wireless communication technology, such as Bluetooth (BT). Based on the established wireless connection, the electronic device 100 can send voice data to the peripheral device and can also receive voice data from the peripheral device.
  • the electronic device 100 realizes a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP processes the data fed back by the camera 193.
  • the ISP may be set in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the video codec is used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100.
  • the processor 110 may execute instructions stored in the internal memory 121, establish a wireless connection with the peripheral device through the wireless communication module 160, and perform short-range data exchange with the peripheral device to pass the peripheral device Realize functions such as calling and playing music.
  • the internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function).
  • the storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100 and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and so on.
  • when a wireless connection is established by Bluetooth, the electronic device 100 can store the Bluetooth address of the peripheral device in the internal memory 121. When the peripheral device includes two main bodies, such as a TWS headset whose left and right earbuds have respective Bluetooth addresses, the electronic device 100 may store the Bluetooth addresses of the left and right earbuds of the TWS headset in the internal memory 121 in association with each other.
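A minimal sketch of such an associated record follows. The record layout, function names, and example addresses are invented for illustration; the source does not specify a storage format:

```python
# Hypothetical paired-device store: both earbuds' Bluetooth (MAC) addresses
# are kept in one associated record, so either earbud resolves to the same
# TWS headset from the electronic device's point of view.
paired_devices: dict = {}

def remember_tws_headset(name: str, left_addr: str, right_addr: str) -> dict:
    """Store the two earbud addresses as one associated record."""
    record = {"left": left_addr, "right": right_addr}
    paired_devices[name] = record
    return record

def owning_headset(addr: str):
    """Resolve either earbud address back to its headset record name."""
    for name, rec in paired_devices.items():
        if addr in rec.values():
            return name
    return None

remember_tws_headset("TWS-101", "AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02")
```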
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A, also called a "loudspeaker," is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also known as the "handset," is used to convert audio electrical signals into sound signals.
  • the voice can be received by bringing the receiver 170B close to the ear.
  • the microphone 170C, also known as a "mike" or "mic," is used to convert sound signals into electrical signals.
  • when making a sound, the user can bring the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which, in addition to collecting sound signals, may also implement a noise reduction function. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the TWS headset when the electronic device 100 establishes a wireless connection with a peripheral device 101 such as a TWS headset, the TWS headset can be used as an audio input/output device of the electronic device 100.
  • the audio module 170 may receive the audio electrical signal transmitted by the wireless communication module 160 to realize functions such as answering a phone call and playing music through a TWS headset.
  • the TWS headset can collect the user's voice signal, convert it into an audio electrical signal, and send it to the wireless communication module 160 of the electronic device 100.
  • the wireless communication module 160 transmits the audio electrical signal to the audio module 170.
  • the audio module 170 can convert the received audio electrical signal into a digital audio signal, encode it, and pass it to the mobile communication module 150, which transmits it to the peer call device to realize the call.
  • the application processor may transmit the audio electrical signal corresponding to the music played by the media player to the audio module 170.
  • the audio electrical signal is transmitted to the wireless communication module 160 by the audio module 170.
  • the wireless communication module 160 may send the audio electrical signal to the TWS headset, so that the TWS headset converts the audio electrical signal into a sound signal and plays it.
  • the headset interface 170D is used to connect wired headsets.
  • the earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the electronic device 100 determines the strength of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally three axes).
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode, pocket mode automatically unlocks and locks the screen.
  • the ambient light sensor 180L is used to sense the brightness of ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access to application lock, fingerprint photo taking, fingerprint answering call, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the touch sensor 180K is also known as a "touch panel."
  • the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touch screen, also called a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can also be placed against the human pulse to receive a blood pressure pulse signal.
  • the application processor may analyze heart rate information based on the blood pressure pulse signal acquired by the bone conduction sensor 180M to implement a heart rate detection function.
  • the key 190 includes a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key or a touch key.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 may generate a vibration prompt.
  • the motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state and changes in battery level, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the electronic device 100 interacts with the network through a SIM card to realize functions such as calls and data communication.
  • the CIG identification (CIG_ID) is used to identify the CIG.
  • CIG(1) and CIG(2) are used to represent different CIG.
  • a CIG can include multiple CISs.
  • the transmission channel between the source device and each destination device is defined as CIS.
  • Each destination device corresponds to a CIS.
  • the mobile phone 100 may configure a CIG for the left and right earplugs of the TWS earphone 101, and configure the CIG to include two CISs, such as CIS(1) and CIS(2).
  • the earplug 101-1 corresponds to CIS (1)
  • the earplug 101-2 corresponds to CIS (2).
  • Each CIS has a different identifier (CIS_ID). For example, CIS(1) and CIS(2) have different identifiers. Multiple CISs in the same CIG have a common CIG synchronization point and CIG playback point, which are used to achieve playback-level synchronization of audio data by multiple peripheral devices.
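The grouping described above can be sketched as plain data structures. This is only an illustrative model of the relationships (a CIG contains several CISs, each with its own CIS_ID and an activation flag); the class and field names are hypothetical and do not come from any Bluetooth stack API.

```python
from dataclasses import dataclass, field

@dataclass
class CIS:
    """One isochronous stream: the channel between the source device and one destination device."""
    cis_id: int           # CIS_ID, distinct within its CIG
    active: bool = False  # whether the stream is currently activated

@dataclass
class CIG:
    """A connected isochronous group; its CISs share a common synchronization and playback point."""
    cig_id: int                                   # CIG_ID
    cis_list: list = field(default_factory=list)  # e.g. CIS(1) and CIS(2)

# The first CIG configured for the left and right earbuds of the TWS earphone.
cig1 = CIG(cig_id=1, cis_list=[CIS(cis_id=1), CIS(cis_id=2)])
```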
  • CIG(1) may include the CIG event (x) and the CIG event (x+1) shown in FIG. 6B, and so on.
  • Each CIG event belongs to an ISO interval (ISO_interval) in time.
  • the CIG event (x) belongs in time to the ISO interval between the CIG(x) anchor point and the CIG(x+1) anchor point.
  • the CIG event (x+1) belongs in time to the ISO interval between the CIG(x+1) anchor point and the CIG(x+2) anchor point.
  • the CIG anchor point is the starting time point of the corresponding CIG event.
  • the CIG(x) anchor point is the starting time point of the CIG event (x).
  • Each CIG event may include multiple CIS events (CIS_event).
  • the CIG event (x) includes the CIS(1) event (x) and the CIS(2) event (x).
  • the CIG event (x+1) includes the CIS(1) event (x+1) and the CIS(2) event (x+1).
  • Each CIS can include multiple CIS events.
  • CIS(1) may include the CIS(1) event (x) and the CIS(1) event (x+1) shown in FIG. 6B.
  • the CIS(2) may include the CIS(2) event (x) and the CIS(2) event (x+1) shown in FIG. 6B.
  • Each CIS event belongs to an ISO interval in time.
  • the CIS(1) event (x) belongs in time to the ISO interval between the CIS(1).x anchor point and the CIS(1).x+1 anchor point.
  • the CIS(2) event (x) belongs in time to the ISO interval between the CIS(2).x anchor point and the CIS(2).x+1 anchor point.
  • the CIS(1) event (x+1) belongs in time to the ISO interval between the CIS(1).x+1 anchor point and the CIS(1).x+2 anchor point.
  • the ISO interval is the time between two consecutive CIS anchors.
  • Two consecutive CIS anchor points refer to two consecutive anchor points of the same CIS.
  • the CIS(1).x anchor point and CIS(1).x+1 anchor point are two consecutive anchor points of CIS(1).
  • the CIS anchor point is the starting time point corresponding to the CIS event.
  • the CIS(1).x anchor point is the starting time point of the CIS(1) event (x).
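The anchor-point arithmetic above reduces to a one-line relation: two consecutive anchor points of the same CIS are exactly one ISO interval apart. A minimal sketch, where the function name is illustrative and the 10 ms interval used in the example is an assumed value:

```python
def cis_anchor(anchor_x_us: int, iso_interval_us: int, n: int) -> int:
    """Anchor point of CIS event (x+n), in microseconds.

    The ISO interval is the time between two consecutive anchor points
    of the same CIS, so anchor(x+n) = anchor(x) + n * ISO_interval.
    """
    return anchor_x_us + n * iso_interval_us

# With an assumed 10 ms ISO interval, the CIS(1).x+2 anchor point
# lies 20 ms after the CIS(1).x anchor point.
offset = cis_anchor(0, 10_000, 2)
```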
  • Each CIS can define NSE sub-events within an ISO interval; that is, each CIS event is composed of a number of sub-events (number of subevents, NSE), where NSE is greater than or equal to 1.
  • the NSE (i.e., N1) of CIS(1) is equal to 2, and the CIS(1) event (x) is composed of the sub-event (1_1) and the sub-event (1_2).
  • the NSE (i.e., N2) of CIS(2) is equal to 2, and the CIS(2) event (x) is composed of the sub-event (2_1) and the sub-event (2_2).
  • each sub-event is composed of one "M->S” and one "S->M".
  • “M->S” is used for the source device to send audio data to the destination device, and for the destination device to receive the audio data sent by the source device.
  • "S->M” is used for the destination device to send audio data to the source device, and the source device to receive audio data sent by the destination device.
  • "M->S” of CIS (1) is used for the mobile phone 100 to send audio data to the earplug 101-1, and for the earplug 101-1 to receive audio data sent by the mobile phone 100.
  • S->M of CIS (1) is used for the earplug 101-1 to send data (such as audio data or feedback information) to the mobile phone 100, and for the mobile phone 100 to receive the data sent by the earplug 101-1.
  • "M->S" of CIS(2) is used for the mobile phone 100 to send audio data to the earplug 101-2, that is, for the earplug 101-2 to receive the audio data sent by the mobile phone 100.
  • S->M of CIS (2) is used for the earplug 101-2 to send data (such as audio data or feedback information) to the mobile phone 100, and for the mobile phone 100 to receive the data sent by the earplug 101-2.
  • the feedback information may be an acknowledgement (acknowledgement, ACK) or a negative acknowledgement (negative acknowledgement, NACK).
  • Each sub-event (Sub_event) belongs to a sub-interval (Sub_interval) in time.
  • the sub-interval of a CIS can be the time between the start time of one sub-event and the start time of the next sub-event in the same CIS event.
  • the sub-interval of CIS(1) (that is, CIS(1)_sub-interval) can be the time between the start time of the sub-event (1_1) in the CIS(1) event (x) and the start time of the sub-event (1_2).
  • the sub-interval of CIS(2) (that is, CIS(2)_sub-interval) can be the time between the start time of the sub-event (2_1) in the CIS(2) event (x) and the start time of the sub-event (2_2).
  • the mobile phone 100 can determine the NSE according to the requirement of the audio data on the duty ratio of the ISO channel.
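The sub-event layout can likewise be sketched: within one CIS event, sub-event k starts k-1 sub-intervals after the CIS anchor point. The function below is a hypothetical illustration rather than a stack API; times are in microseconds and the 2.5 ms sub-interval in the example is an assumed value.

```python
def subevent_starts(cis_anchor_us: int, cis_sub_interval_us: int, nse: int) -> list:
    """Start times of the NSE sub-events of one CIS event.

    Each sub-event (one "M->S" plus one "S->M") begins one CIS
    sub-interval after the start of the previous sub-event.
    """
    if nse < 1:
        raise ValueError("NSE is greater than or equal to 1")
    return [cis_anchor_us + k * cis_sub_interval_us for k in range(nse)]

# NSE = 2 as in the example: sub-event (1_1) starts at the CIS(1).x
# anchor point, and sub-event (1_2) one CIS(1) sub-interval later.
starts = subevent_starts(0, 2_500, 2)
```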
  • the protocol framework may include: an application layer, a host, a host controller interface (HCI), and a controller.
  • the controller includes the link layer and the physical layer.
  • the physical layer is responsible for providing physical channels for data transmission.
  • the link layer includes ACL links and ISO channels.
  • the ACL link is used to transmit control messages between devices, such as content control messages (such as previous song, next song, etc.).
  • the ISO channel can be used to transmit isochronous data (such as audio data) between devices.
  • Host and Controller communicate via HCI.
  • the communication medium between Host and Controller is HCI instruction.
  • Host can be implemented in the application processor (AP) of the device, and Controller can be implemented in the Bluetooth chip of the device.
  • Host and Controller can be implemented in the same processor or controller, in which case HCI is optional.
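The layering can be pictured as two objects talking across the HCI boundary. The sketch below is a toy stand-in (string opcodes instead of binary HCI packets) meant only to show the direction of commands and events; only the command and event names come from the text, everything else is assumed.

```python
class Controller:
    """Controller side (link layer + physical layer), e.g. in a Bluetooth chip."""

    def handle_command(self, opcode: str) -> str:
        # A real controller parses binary HCI command packets; this toy
        # version just answers known commands with the matching HCI event.
        replies = {
            "LE Set CIG Parameters": "Command Complete",
            "LE Create CIS": "HCI Command Status",
        }
        return replies.get(opcode, "Unknown HCI Command")


class Host:
    """Host side, e.g. implemented in the application processor (AP)."""

    def __init__(self, controller: Controller):
        self.controller = controller

    def send(self, opcode: str) -> str:
        # Host and Controller communicate only through HCI instructions.
        return self.controller.handle_command(opcode)


host = Host(Controller())
```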
  • the electronic device is the mobile phone 100
  • the first earplug of the TWS earphone is the earplug 101-1 of the TWS earphone 101
  • the second earplug is the earplug 101-2 of the TWS earphone 101 as an example for description.
  • the mobile phone 100 can configure for the TWS headset 101 a first CIG including two CISs (such as the first CIS and the second CIS).
  • the embodiment of the present application describes, in conjunction with the BLE-based audio protocol framework shown in FIG. 7, the process in which the mobile phone 100 configures the first CIG and creates the CISs.
  • the configuration of the CIG and the creation of the CIS of the mobile phone 100 are performed on the premise that the mobile phone 100 and the earplug (the earplug 101-1 and/or the earplug 101-2) are already connected (Connection).
  • Both the mobile phone 100 and the earplug have a Host and a link layer LL (in a controller), and the Host and LL communicate through HCI. Among them, Host and LL of the earplug are not shown in FIG. 8.
  • the Host of the mobile phone 100 can set the CIG parameter of the first CIG through the HCI instruction.
  • CIG parameters are used to create isochronous data transmission channels (ie CIS).
  • in response to a service request from the application layer, the Host of the mobile phone 100 may set the CIG parameters of the first CIG according to the audio data. For different audio data, the mobile phone 100 can set different CIG parameters.
  • the Host of the mobile phone 100 may send CIG parameter setting information to the LL of the mobile phone 100 through HCI.
  • the CIG parameter setting information may be the HCI instruction "LE Set CIG Parameters".
  • the first confirmation information may be a response message "Command Complete".
  • the Host of the mobile phone 100 can initiate the creation of the CIS through the HCI instruction. Specifically, the Host of the mobile phone 100 can send the CIS creation information to the LL of the mobile phone 100 through HCI.
  • the CIS creation information may be the HCI instruction "LE Create CIS".
  • the LL of the mobile phone 100 can return the second confirmation information.
  • the second confirmation information may be "HCI Command Status".
  • the LL of the first earbud replies to the LL of the mobile phone with the CIS connection response information (e.g., LL_CIS_RSP message).
  • the LL of the mobile phone sends the fourth confirmation information (e.g., LL_CIS_IND message) to the LL of the first earbud.
  • the LL of the mobile phone sends the CIS connection established information (e.g., LE CIS Established message) to the Host of the mobile phone. So far, the CIS connection link has been established between the mobile phone and the first earbud.
  • the LL of the mobile phone 100 may request the earplug (such as the earplug 101-1) to create the CIS through a CIS request (such as LL_CIS_REQ).
  • the earplug (such as the earplug 101-1) replies to the LL of the mobile phone 100 with a CIS response (such as LL_CIS_RSP).
  • the LL of the mobile phone 100 sends a CIS confirmation (eg, LL_CIS_IND) to the earplug (eg, earplug 101-1).
  • the LL of the mobile phone can send the CIS connection established information (such as LE CIS Established message) to the host of the mobile phone, which is not shown in the drawings.
  • the establishment of the CIS between the mobile phone 100 and the earplug is completed.
  • the mobile phone 100 and the earplug can perform audio data transmission.
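Putting the steps above together, the configuration and creation of a CIS follow a fixed message order. The list below restates that order as data; the message names follow the text, while the tuple structure and helper function are illustrative only.

```python
# (sender -> receiver, message) pairs for configuring the first CIG
# and creating one CIS between the mobile phone and the first earbud.
CIS_CREATION_FLOW = [
    ("phone Host -> phone LL", "LE Set CIG Parameters"),  # set CIG parameters over HCI
    ("phone LL -> phone Host", "Command Complete"),       # first confirmation
    ("phone Host -> phone LL", "LE Create CIS"),          # initiate CIS creation
    ("phone LL -> phone Host", "HCI Command Status"),     # second confirmation
    ("phone LL -> earbud LL", "LL_CIS_REQ"),              # request the earbud to create the CIS
    ("earbud LL -> phone LL", "LL_CIS_RSP"),              # CIS connection response
    ("phone LL -> earbud LL", "LL_CIS_IND"),              # CIS confirmation
    ("phone LL -> phone Host", "LE CIS Established"),     # CIS connection link established
]

def next_message(last: str) -> str:
    """Message that should follow `last` in the creation flow."""
    names = [msg for _, msg in CIS_CREATION_FLOW]
    return names[names.index(last) + 1]
```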
  • the mobile phone 100 can configure the TWS earphone 101 with the first CIG including two CISs.
  • the CIS activated by the mobile phone 100 and the TWS earphone 101 is not the same in the mono-ear state and the binaural state. Specifically:
  • the mobile phone 100 can activate the first CIS and the second CIS, transmit audio data through the first CIS and the earplug 101-1, and transmit audio data through the second CIS and the earplug 101-2.
  • the mobile phone 100 can deactivate one CIS and continue to use another CIS and the corresponding earplug for audio data transmission.
  • the mobile phone 100 activates only one CIS (such as the first CIS), and transmits audio data with the corresponding earplug (such as the first earplug) through the activated CIS. Among them, the mobile phone 100 does not activate another CIS (such as the second CIS). In this way, when the TWS earphone 101 is switched from the mono-ear state to the binaural state, the electronic device 100 can activate another CIS and use two CISs and two earplugs for audio data transmission.
  • the electronic device 100 does not need to reconfigure the ISO channel (ie, reconfigure the CIS), as long as the corresponding CIS is activated or deactivated. In this way, the transmission of audio data will not be interrupted, which can ensure the normal transmission of audio data and improve the user experience.
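The key point above — switching only toggles activation flags, never the CIG configuration — can be captured in a few lines. This is a sketch under assumed names (the CISs are tracked as a dict of activation flags; nothing here is a real stack call):

```python
def set_cis_activation(cis_active: dict, binaural: bool, in_use_cis: int) -> dict:
    """Activation flags after a single/binaural switch.

    The first CIG and both CISs remain configured throughout; only the
    activation state changes, so the surviving stream keeps transmitting
    without interruption and no ISO channel is reconfigured.
    """
    return {cis_id: binaural or cis_id == in_use_cis for cis_id in cis_active}
```

For example, the binaural state activates both CIS(1) and CIS(2), while the second single-ear state keeps only CIS(2) active and deactivates CIS(1).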
  • the lid of the earplug box 101-3 of the TWS earphone 101 can be opened.
  • the earplug 101-1 and the earplug 101-2 can be automatically paired and connected.
  • any earplug (such as the earplug 101-2) of the earplug 101-1 and the earplug 101-2 can send a paired broadcast to the outside.
  • the mobile phone 100 may receive the pairing broadcast and prompt the user that the relevant Bluetooth device (such as the earbud 101-2) has been scanned.
  • the mobile phone 100 can be paired with the earplug 101-2.
  • the earbud 101-2 can send the Bluetooth address of the mobile phone 100 to the earbud 101-1 through the Bluetooth connection with the earbud 101-1, and notify the earbud 101-1 to send a pairing broadcast to the outside .
  • the mobile phone 100 can receive the pairing broadcast sent by the earplug 101-1 and pair with the earplug 101-1.
  • after the mobile phone 100 is paired with the earplug 101-2, it can establish the ACL link 1 with the earplug 101-2.
  • after the mobile phone 100 is paired with the earplug 101-1, it can establish the ACL link 2 with the earplug 101-1.
  • the earplug 101-2 may also send the MAC address of the earplug 101-1 to the mobile phone 100 to indicate to the mobile phone 100 that the earplug 101-1 and the earplug 101-2 are two main bodies of the same peripheral device (such as the TWS headset 101).
  • the mobile phone 100 can determine whether the TWS earphone 101 is in a mono-ear state or a binaural state in the following ways (1)-way (3).
  • the monaural state includes a first monaural state and a second monaural state. That is, the mobile phone 100 can execute S900 shown in FIG. 9 in the following manner (1)-method (3):
  • Method (1): whether both the earplug 101-1 and the earplug 101-2 are taken out of the earplug box 101-3.
  • the user can take out one or two earplugs from the earplug box 101-3.
  • the earplug can detect that the earplug is taken out of the earplug box 101-3 through a sensor (such as an optical sensor or a touch sensor) or an electrical connector.
  • after the earplug is taken out of the earplug box 101-3, it can indicate to the mobile phone 100 that it has been taken out of the earplug box 101-3.
  • take the earplug 101-1 as an example.
  • the earplug 101-1 may send a control command to the mobile phone 100 through the ACL link 1 to indicate that the earplug is taken out of the earplug box 101-3.
  • the mobile phone 100 may determine that the TWS earphone 101 is in a binaural state. That is, the two earplugs (earplug 101-1 and earplug 101-2) of the TWS earphone 101 are used together as the audio input/output device of the mobile phone 100.
  • the mobile phone 100 may determine that the TWS earphone 101 is in a mono-ear state. That is, one earplug (such as earplug 101-1) of the TWS earphone 101 is used alone as the audio input/output device of the mobile phone 100.
  • Method (2): whether both the earplug 101-1 and the earplug 101-2 are worn.
  • the earplug can be worn on the ear.
  • the earplug can detect whether the earplug is worn through a sensor (such as a light sensor or a bone sensor).
  • after an earplug is worn, it can indicate to the mobile phone 100 that it is worn.
  • the earplug 101-1 may send a control command to the mobile phone 100 through the ACL link 1 to indicate that the earplug is worn.
  • the user may use only one earplug to transmit audio data with the mobile phone 100. Based on this, in the embodiment of the present application, it can be determined whether the TWS earphone 101 is in a binaural state or a single ear state by judging whether both the earplug 101-1 and the earplug 101-2 are worn.
  • the mobile phone 100 may determine that the TWS earphone 101 is in a binaural state. That is, the two earplugs (earplug 101-1 and earplug 101-2) of the TWS earphone 101 are used together as the audio input/output device of the mobile phone 100.
  • the mobile phone 100 may determine that the TWS earphone 101 is in a mono-ear state. That is, one earplug (such as earplug 101-1) of the TWS earphone 101 is used alone as the audio input/output device of the mobile phone 100.
  • Method (3): whether the earplug 101-1 and the earplug 101-2 are paired and connected.
  • the mobile phone 100 can determine whether the TWS earphone 101 is in a binaural state or a mono-ear state through whether the earplug 101-1 and the earplug 101-2 are paired and connected.
  • after the user takes out one earplug (such as the earplug 101-1) from the earplug box 101-3, the user may not continue to take out the other earplug (such as the earplug 101-2). Then, the user closes the earplug box 101-3. After the earplug box 101-3 is closed, the earplug 101-2 in the earplug box 101-3 and the earplug 101-1 outside the earplug box 101-3 are disconnected. That is, the two earplugs of the TWS earphone 101 are not paired and connected.
  • the earplug 101-1 outside the earplug box 101-3 can indicate to the mobile phone 100 that the two earplugs are disconnected.
  • the earplug 101-1 may send a control command to the mobile phone 100 through the ACL link 1 to instruct the two earplugs to disconnect.
  • the mobile phone 100 may determine that the TWS earphone 101 is in a binaural state. That is, the two earplugs (earplug 101-1 and earplug 101-2) of the TWS earphone 101 are used together as the audio input/output device of the mobile phone 100.
  • the mobile phone 100 may determine that the TWS earphone 101 is in a single-ear state. That is, one earplug (such as earplug 101-1) of the TWS earphone 101 is used alone as the audio input/output device of the mobile phone 100.
  • the method for the mobile phone 100 to determine whether the TWS earphone 101 is in a mono-ear state or a binaural state includes but is not limited to the above method (1)-method (3).
  • the mobile phone 100 may determine whether the TWS earphone 101 is in a mono-ear state or a binaural state through whether the earplug 101-1 and the earplug 101-2 are both connected to the mobile phone.
  • after the user takes out the earplug 101-1 from the earplug box 101-3, the user may not continue to take out the earplug 101-2. Then, the user closes the earplug box 101-3. After the earplug box 101-3 is closed, the earplug 101-2 in the earplug box 101-3 is disconnected from the mobile phone 100. At this time, the mobile phone 100 may determine that the TWS earphone 101 is in a monaural state.
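For illustration, methods (1)-(3) can be folded into one predicate. Note that the text treats each method as sufficient on its own; requiring all three signals, as below, is just one possible policy, and the parameter names are hypothetical.

```python
def is_binaural(both_out_of_box: bool, both_worn: bool, buds_paired: bool) -> bool:
    """True if the TWS earphone should be treated as being in the binaural state.

    both_out_of_box -- method (1): both earbuds taken out of the earbud box
    both_worn       -- method (2): both earbuds are worn
    buds_paired     -- method (3): the two earbuds are paired and connected
    """
    return both_out_of_box and both_worn and buds_paired
```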
  • when CIS(1) and CIS(2) are configured in the serial scheduling (Sequential) transmission mode or in the interleaved scheduling (Interleaved) transmission mode, the configured CIG parameters of the first CIG are different.
  • the TWS earphone 101 is in a binaural state, that is, the two earplugs (earplug 101-1 and earplug 101-2) of the TWS earphone 101 together serve as the audio input/output device of the mobile phone 100.
  • a method in which the CIS (1) and CIS (2) are configured as interleaved scheduling transmission modes is taken as an example to describe the method of the embodiment of the present application.
  • the mobile phone 100 may perform S901. Among them, the mobile phone 100 may configure the first CIG during the process of configuring the CIG by executing S801 shown in FIG. 8, and configure CIS(1) and CIS(2) in the interleaved scheduling transmission mode shown in (a) in FIG. 10. Specifically, as shown in (a) or (b) in FIG. 10:
  • the anchor point of CIS(1) (such as the CIS(1).x anchor point) is the anchor point of the first CIG (such as the CIG(x) anchor point); and the anchor point of CIS(2) (such as the CIS(2).x anchor point) is the same as the end point of the first sub-event (i.e., sub-event (1_1)) in the CIS(1) event (x).
  • the sub-interval of CIS(1) (such as CIS(1)_sub-interval) is different from the sub-interval of CIS(2) (such as CIS(2)_sub-interval).
  • the mobile phone 100 may create CIS(1) for the earplug 101-1 by performing S802 with the earplug 101-1.
  • the mobile phone 100 can create CIS(2) for the earplug 101-2 by performing S803 with the earplug 101-2.
  • the step of creating the CIS(1) for the earphone 101-1 and the CIS(2) for the earphone 101-2 can be between S901-S902 shown in FIG. 9 (not shown in FIG. 9).
  • the mobile phone 100 may instruct the earplug 101-1 to activate CIS (1) and the earplug 101-2 to activate CIS (2) (ie, execute S902 shown in FIG. 9).
  • the mobile phone 100 may send an activation instruction to the earplug 101-1 through ACL (1) and an activation instruction to the earplug 101-2 through ACL (2).
  • the activation instruction is used to trigger the earplug 101-1 to activate CIS (1) and trigger the earplug 101-2 to activate CIS (2).
  • the mobile phone 100 can transmit audio data with the earplug 101-1 and the earplug 101-2 according to the interleaved scheduling transmission method shown in (a) in FIG. 10 (That is, execute S903 shown in FIG. 9).
  • the mobile phone 100 transmits audio data with the earplug 101-1 and the earplug 101-2 according to the interleaved scheduling transmission method shown in (a) of FIG. 10; that is, S903 shown in FIG. 9 may specifically include the following process (A) to process (D).
  • the mobile phone 100 starts from the CIS(1).x anchor point (that is, the CIG(x) anchor point), and "M->S" in the sub-event (1_1) of the CIS(1) event (x) Send audio data (such as audio data packet 1) to the earbud 101-1.
  • the earplug 101-1 may receive audio data (such as audio data packet 1) sent by the mobile phone 100 in "M->S” in the sub-event (1_1).
  • the earplug 101-1 sends the first data to the mobile phone 100 in "S->M” in the sub-event (1_1).
  • the mobile phone 100 receives the first data sent by the earplug 101-1 in "S->M” in the sub-event (1_1).
  • the first data may include: feedback information returned by the earplug 101-1 to the mobile phone 100; and/or audio data collected by a microphone (such as the microphone 160) in the earplug 101-1.
  • the feedback information may be the ACK or NACK of the audio data packet 1.
  • the mobile phone 100 starts from the CIS(2).x anchor point, and sends audio data (such as audio data packet 1) to the earplug 101-2 in "M->S" in the sub-event (2_1) of the CIS(2) event (x).
  • the earplug 101-2 may receive the audio data (such as audio data packet 1) sent by the mobile phone 100 in "M->S” in the sub-event (2_1).
  • the earplug 101-2 sends the second data to the mobile phone 100 at "S->M” in the sub-event (2_1).
  • the mobile phone 100 receives the second data sent by the earplug 101-2 in "S->M” in the sub-event (2_1).
  • the second data may include: feedback information returned by the earplug 101-2 to the mobile phone 100; and/or audio data collected by a microphone (such as the microphone 160) in the earplug 101-2.
  • the feedback information may be the ACK or NACK of the audio data packet 1.
  • the mobile phone 100 sends audio data (such as audio data packet 2) to the earplug 101-1 in "M->S” in the sub-event (1_2).
  • the earplug 101-1 may receive audio data (eg, audio data packet 2) sent by the mobile phone 100 in "M->S” in the sub-event (1_2).
  • the earplug 101-1 sends the third data to the mobile phone 100 by "S->M” in the sub-event (1_2).
  • the mobile phone 100 receives the third data sent by the earplug 101-1 in "S->M” in the sub-event (1_2).
  • the third data may include: feedback information that the earplug 101-1 replies to the mobile phone 100; and/or audio data collected by a microphone (such as the microphone 160) in the earplug 101-1.
  • the feedback information may be the ACK or NACK of the audio data packet 2.
  • the mobile phone 100 transmits audio data (such as audio data packet 2) to the earplug 101-2 in "M->S” in the sub-event (2_2) of the CIS (2) event (x).
  • the earplug 101-2 can receive audio data (such as audio data packet 2) sent by the mobile phone 100 in "M->S” in the sub-event (2_2).
  • the earplug 101-2 sends the fourth data to the mobile phone 100 in "S->M” in the sub-event (2_2).
  • the mobile phone 100 receives the fourth data sent by the earplug 101-2 in "S->M” in the sub-event (2_2).
  • the fourth data may include: feedback information that the earplug 101-2 replies to the mobile phone 100; and/or audio data collected by a microphone (such as the microphone 160) in the earplug 101-2.
  • the feedback information may be the ACK or NACK of the audio data packet 2.
  • the mobile phone 100 and the left and right earplugs of the TWS earphone 101 can use the same transmission method as in the CIG event (x) for audio data transmission in the CIG event (x+n).
  • n is greater than or equal to 1, and n is an integer.
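Processes (A)-(D) above can be generated mechanically: under interleaved scheduling, the sub-events of CIS(1) and CIS(2) alternate, and each sub-event is one "M->S" slot followed by one "S->M" slot. A sketch (the slot labels are illustrative strings, not protocol fields):

```python
def interleaved_schedule(nse: int = 2) -> list:
    """Slot order of one CIG event under interleaved scheduling."""
    slots = []
    for k in range(1, nse + 1):          # k-th sub-event of each CIS
        for cis in (1, 2):               # CIS(1) and CIS(2) alternate
            slots.append(f"CIS({cis}) sub-event ({cis}_{k}) M->S")
            slots.append(f"CIS({cis}) sub-event ({cis}_{k}) S->M")
    return slots
```

With NSE = 2 this yields the order of processes (A), (B), (C), (D): sub-event (1_1), then (2_1), then (1_2), then (2_2).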
  • the TWS earphone 101 may perform single/binaural switching, that is, the TWS earphone 101 may be switched from the binaural state to the single-ear state. That is, the mobile phone 100 can execute S904 shown in FIG. 9.
  • the mobile phone 100 may determine that the TWS earphone 101 is switched from the binaural state to the mono-ear state in the following ways (I)-way (IV). That is, the mobile phone 100 can execute S904 shown in FIG. 9 in the following ways (I)-(IV):
  • the mobile phone 100 determines that the TWS earphone 101 is in a binaural state.
  • the method (I) corresponds to the above method (1).
  • the mobile phone 100 can determine whether to switch to the single-ear state by judging whether the earplug 101-1 or the earplug 101-2 is put in the earplug box 101-3.
  • the mobile phone 100 may determine that the TWS earphone 101 is switched from the binaural state to the mono-ear state when either earplug 101-1 or earplug 101-2 is put in the earplug box 101-3. That is, one earplug (such as earplug 101-2) of the TWS earphone 101 is used alone as the audio input/output device of the mobile phone 100.
  • the earplug can detect that the earplug is put into the earplug box 101-3 through a sensor (such as an optical sensor or a touch sensor) or an electrical connector. After the earplug is put into the earplug box 101-3, the mobile phone 100 can be instructed that the earplug is put into the earplug box 101-3. For example, take the earplug 101-2 as an example. The earplug 101-2 may send a control command to the mobile phone 100 through the ACL link 2 to indicate that the earplug 101-2 is put into the earplug box 101-3.
  • the method (II) corresponds to the above method (2).
  • the mobile phone 100 can determine whether to switch to the single-ear state by determining whether the earplug 101-1 or the earplug 101-2 is worn. For example, the mobile phone 100 may determine that the TWS earphone 101 is switched from the binaural state to the mono-ear state when either earplug 101-1 or earplug 101-2 is not worn.
  • one earplug (such as earplug 101-2) of the TWS earphone 101 is used alone as the audio input/output device of the mobile phone 100.
  • the TWS earphone 101 will not be switched from the binaural state to the monaural state.
  • a sensor (such as a light sensor or a bone sensor) may be used to detect that an earplug is switched from the worn state to the unworn state.
  • for example, suppose the earplug 101-1 and the earplug 101-2 are both worn.
  • the earplug 101-2 can detect through the sensor that the earplug 101-2 is switched from the worn state to the unworn state.
  • the earplug 101-2 may send a control command to the mobile phone 100 through the ACL link 2 to indicate that the earplug 101-2 is switched from the worn state to the unworn state.
  • the method (III) corresponds to the above method (3).
  • the two earplugs of the TWS earphone 101 are paired and connected.
  • the two earplugs of the TWS earphone 101 are disconnected. Therefore, the mobile phone 100 can determine whether the TWS earphone 101 is switched from the binaural state to the mono-ear state by whether the earplug 101-1 and the earplug 101-2 are disconnected.
  • the user may stop using the two earplugs (earplug 101-1 and earplug 101-2) of the TWS headset 101 for some reason (such as an earplug issuing a low-battery reminder).
  • for example, the user may take off one of the earplugs and put it in the earplug box 101-3.
  • when the battery of an earplug is low, a low-battery reminder may be issued.
  • earplugs can issue low-battery reminders through voice prompts or vibration.
  • after an earplug (such as the earplug 101-1) is put into the earplug box 101-3, it can be disconnected from the other earplug (such as the earplug 101-2). That is, the two earplugs of the TWS earphone 101 are not paired and connected. After the two earplugs are disconnected, the earplug outside the earplug box 101-3 (such as the earplug 101-2) can indicate to the mobile phone 100 that the two earplugs are disconnected. For example, the earplug 101-2 may send a control command to the mobile phone 100 through the ACL link 2 to indicate that the two earplugs are disconnected.
  • the mobile phone 100 may determine that the TWS earphone 101 is switched from the binaural state to the monaural state. That is, one earplug (such as earplug 101-2) of the TWS earphone 101 is used alone as the audio input/output device of the mobile phone 100. Of course, if the earplug 101-1 and the earplug 101-2 are not disconnected, the mobile phone 100 may determine that the TWS earphone 101 will not switch to the mono-ear state.
  • the earplug can also send a control command to the mobile phone 100 through the ACL link to indicate that the power of the earplug is lower than the preset power Threshold. If the power of any earplug is lower than the preset power threshold, the mobile phone 100 may determine that the TWS earphone 101 is switched from the binaural state to the mono-ear state.
  • the method (IV) may correspond to any one of the implementations in the above method (1)-method (3).
  • the method for the mobile phone 100 to determine whether the TWS earphone 101 is switched from the binaural state to the monaural state includes but is not limited to the above method (I)-method (IV).
  • the mobile phone 100 may determine whether the TWS earphone 101 is switched from the binaural state to the mono-ear state through whether the earplug 101-1 or the earplug 101-2 is disconnected from the mobile phone.
  • the user may wear one earplug on the ear, and the other earplug may be used by another user. However, during use, the user or the other user may move, and the earplugs they wear may move accordingly.
  • the mobile phone 100 may be disconnected from the earplug. If the mobile phone 100 detects that an earplug (such as the earplug 101-1) is disconnected from the mobile phone 100, the mobile phone 100 may determine that the TWS earphone 101 is switched from the binaural state to the mono-ear state.
  • an earplug such as the earplug 101-1
  • If the TWS earphone 101 has not switched to the monaural state, the mobile phone 100 can continue to use the interleaved scheduling transmission method to transmit audio data with the two earplugs of the TWS earphone 101 through CIS(1) and CIS(2). That is, as shown in FIG. 9, if the TWS earphone 101 does not switch to the monaural state, the mobile phone 100 and the left and right earplugs of the TWS earphone 101 may continue to perform S903.
  • If the TWS earphone 101 is switched from the binaural state to the monaural state (such as the second monaural state, in which the earplug 101-2 is used and the earplug 101-1 is not used), the earplug 101-1 and the mobile phone 100 can execute S905 shown in FIG. 9 to deactivate CIS(1).
  • After the mobile phone 100 deactivates CIS(1), it can stop transmitting audio data with the earplug 101-1 through CIS(1).
  • In addition, the mobile phone 100 may continue to use the interleaved scheduling transmission method shown in (a) in FIG. 10 to transmit audio data with the earplug 101-2 through CIS(2) (that is, execute S906 shown in FIG. 9).
  • After the switch to the monaural state (such as the second monaural state), the mobile phone 100 transmits audio data with the earplug 101-2 according to the interleaved scheduling transmission method shown in (a) in FIG. 10. That is, S906 shown in FIG. 9 may specifically include the above process (B) and process (D), but does not include the above process (A) and process (C).
  • In other words, the mobile phone 100 only sends audio data to the earplug 101-2 in the "M->S" of the sub-event (2_1) shown in (a) in FIG. 10, receives the audio data sent by the earplug 101-2 in the "S->M" of the sub-event (2_1), and likewise sends and receives audio data in the "M->S" and "S->M" of the sub-event (2_2).
  • The TWS earphone 101 may also be switched from the binaural state to the first monaural state, in which the earplug 101-1 is used and the earplug 101-2 is not used.
  • In this case, the mobile phone 100 can deactivate CIS(2). After the mobile phone 100 deactivates CIS(2), it can stop transmitting audio data with the earplug 101-2 through CIS(2).
  • In addition, the mobile phone 100 can continue to use the interleaved scheduling transmission method shown in (a) in FIG. 10 to transmit audio data with the earplug 101-1 through CIS(1) (not shown in FIG. 9). In this case, the mobile phone 100 performs the above process (A) and process (C) with the earplug 101-1, and no longer performs the above process (B) and process (D) with the earplug 101-2.
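For illustration only (not part of the claimed method), the mapping between the earphone state and the processes (A)-(D) described above can be sketched as follows; the state names are shorthand for the states in the text:

```python
def active_processes(state):
    """Which of the transfer processes (A)-(D) run in each state, per the text:
    (A)/(C) are the CIS(1) <-> earplug 101-1 transfers,
    (B)/(D) are the CIS(2) <-> earplug 101-2 transfers."""
    if state == "binaural":
        return ["A", "B", "C", "D"]
    if state == "first_monaural":   # earplug 101-1 used; CIS(2) deactivated
        return ["A", "C"]
    if state == "second_monaural":  # earplug 101-2 used; CIS(1) deactivated
        return ["B", "D"]
    raise ValueError(f"unknown state: {state}")

# After the binaural -> second-monaural switch, only (B) and (D) remain.
assert active_processes("second_monaural") == ["B", "D"]
assert active_processes("first_monaural") == ["A", "C"]
```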
  • It should be noted that the mobile phone 100 can transmit audio data on a CIS only after that CIS is activated (sending audio data in the "M->S" of the corresponding CIS and receiving audio data in the "S->M").
  • Neither the mobile phone 100 nor the earbuds will transmit audio data on a deactivated or never-activated CIS.
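The activation rule above can be expressed as a minimal sketch (a hypothetical class for illustration, not an actual Bluetooth controller API): audio flows on a CIS only while that CIS is activated.

```python
class Cis:
    """Minimal sketch of a CIS whose activation gates audio transfer."""
    def __init__(self, cis_id):
        self.cis_id = cis_id
        self.active = False

    def activate(self):
        self.active = True

    def deactivate(self):
        self.active = False

    def transmit(self, packet):
        # Audio data may only flow on an activated CIS ("M->S"/"S->M").
        if not self.active:
            return None  # deactivated or never activated: nothing is sent
        return f"CIS({self.cis_id}) carries {packet}"

cis1, cis2 = Cis(1), Cis(2)
cis1.activate()
assert cis1.transmit("audio packet 1") == "CIS(1) carries audio packet 1"
assert cis2.transmit("audio packet 1") is None  # CIS(2) not activated
```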
  • Optionally, CIS(1) and CIS(2) of the first CIG may also be configured in the serially scheduled transmission mode.
  • For the serially scheduled transmission mode, reference may be made to the description in other parts of the embodiments of this application, which is not repeated here.
  • The advantage of the interleaved scheduling transmission method is that the sub-event (1_1) and sub-event (1_2) of CIS(1) and the sub-event (2_1) and sub-event (2_2) of CIS(2) are interleaved in time. That is, the audio data of CIS(1) and the audio data of CIS(2) can be transmitted interleaved in time, which makes the degree of interference on the different CISs more equal and improves the anti-interference performance of audio data transmission.
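As a minimal illustration of this interleaving (assuming two sub-events per CIS event, as in (a) in FIG. 10), the sub-events of CIS(1) and CIS(2) alternate in time:

```python
def interleaved_schedule(n_subevents=2):
    """Return the time order of sub-events in the interleaved mode:
    CIS(1) and CIS(2) sub-events alternate, so interference on the
    two CISs is spread more evenly across the CIG event."""
    order = []
    for k in range(1, n_subevents + 1):
        order.append(f"(1_{k})")  # CIS(1) sub-event k
        order.append(f"(2_{k})")  # CIS(2) sub-event k
    return order

assert interleaved_schedule() == ["(1_1)", "(2_1)", "(1_2)", "(2_2)"]
```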
  • The mobile phone 100 and the earplug 101-2 may receive the user's suspend operation during the process of performing S906. The suspend operation is used to trigger the TWS earphone 101 to pause playing audio data.
  • the method of the embodiment of the present application may further include S910.
  • For example, the above suspend operation may be a click operation on the "pause button" of the music playing interface displayed on the mobile phone 100 while the earplug 101-2, as the output device of the mobile phone 100, performs S906 to play music.
  • Alternatively, the suspend operation may be the user's operation of turning on the "mute button" of the mobile phone 100. The "mute button" may be a physical button of the mobile phone 100.
  • Alternatively, the above suspend operation may be a click operation on the "hang-up button" of the voice communication interface displayed on the mobile phone 100 while the earplug 101-2, as the input/output device of the mobile phone 100, performs S906 for voice communication.
  • Alternatively, the above suspend operation may be the user's first operation on a preset physical button on the earplug 101-2 (such as a click operation, a long-press operation, or a double-click operation).
  • The first operation on the preset physical button is used to trigger the earplug 101-2 to pause playing and collecting sound signals.
  • Other operations (such as a second operation) on the preset physical button may trigger the earplug 101-2 to perform other events (for example, pairing with the earplug 101-1, disconnecting from the earplug 101-1, etc.).
  • In response to the suspend operation, the mobile phone 100 may suspend the transmission of audio data with the earplug 101-2.
  • In some cases, the CIS transmission mode configured by the mobile phone 100 for the earplugs is not suitable for the current state of the TWS earphone 101 (that is, the monaural state, such as the second monaural state).
  • The interleaved scheduling transmission mode that the mobile phone 100 configured for CIS(1) and CIS(2) is more suitable for the binaural state: it makes the degree of interference on CIS(1) and CIS(2) more equal and improves the anti-interference performance of audio data transmission.
  • In the monaural state (such as the second monaural state), the earplug 101-2 is used alone as the input/output device of the mobile phone 100. If the interleaved scheduling transmission method shown in (a) in FIG. 10 continues to be used, the mobile phone 100 transmits audio data with the earplug 101-2 only during the sub-event (2_1) and the sub-event (2_2), and stops transmitting audio data with the earplug 101-1 during the sub-event (1_1) and the sub-event (1_2), which leaves that time idle.
  • This idle time may be occupied by other transmissions (such as Wi-Fi), which increases the possibility that the audio data transmitted between the mobile phone 100 and the earplug 101-2 during the sub-event (2_1) and sub-event (2_2) will be interfered with.
  • Therefore, after the audio data is suspended, the mobile phone 100 may re-execute S900 to determine whether the TWS earphone 101 is currently in the monaural or binaural state, and then execute S901 or S911 to configure the first CIG for the TWS earphone 101 according to the determination result.
  • the mobile phone 100 can execute S911 to configure CIS(1) and CIS(2) as the serially scheduled transmission mode.
  • While the audio data is suspended (that is, stopped), the mobile phone 100 reconfigures the CIS, so that after the service restarts, the mobile phone 100 can transmit audio data through the reconfigured CIS. In this way, the service is not interrupted by the reconfiguration of the CIS.
  • the TWS earphone 101 may also be switched to the binaural state again.
  • the method in the embodiment of the present application may further include S914.
  • the mobile phone 100 may activate CIS (1), that is, execute S907.
  • the mobile phone 100 can use the interleaved scheduling transmission method to transmit audio data through the CIS (1) and CIS (2) and the two earplugs of the TWS earphone 101 (that is, execute S903).
  • the method in the embodiments of the present application may further include S904-S906 and S910.
  • In this case, the TWS earphone 101 is in the monaural state (such as the first monaural state); that is, one earplug (such as the earplug 101-1) of the TWS earphone 101 is used alone as the audio input/output device of the mobile phone 100.
  • Taking this case as an example, the method of the embodiments of the present application is described below.
  • the mobile phone 100 may perform S911.
  • The mobile phone 100 can configure the first CIG during the process of configuring the CIG by executing S801 shown in FIG. 8, and configure CIS(1) and CIS(2) in the serially scheduled transmission mode shown in (a) in FIG. 12. Specifically, as shown in (a) in FIG. 12 or (b) in FIG. 12:
  • the anchor point of CIS(1) (such as the CIS(1).x anchor point) is the anchor point of the first CIG (such as the CIG(x) anchor point), and the anchor point of CIS(2) (such as the CIS(2).x anchor point) is the same as the end point of the CIS(1) event (x);
  • the sub-interval of CIS(1) (such as CIS(1)_sub-interval) is the same as the sub-interval of CIS(2) (such as CIS(2)_sub-interval).
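Under this serial configuration, the CIS(2) anchor point follows directly from the CIG anchor point and the length of the CIS(1) event. A sketch in arbitrary time ticks (the concrete durations are not specified in the text and are assumed here only for illustration):

```python
def serial_anchor_points(cig_anchor, cis1_event_len):
    """FIG. 12(a): CIS(1) starts at the CIG(x) anchor point, and the
    CIS(2) anchor point coincides with the end of the CIS(1) event (x)."""
    cis1_anchor = cig_anchor
    cis2_anchor = cis1_anchor + cis1_event_len
    return cis1_anchor, cis2_anchor

assert serial_anchor_points(0, 10) == (0, 10)
assert serial_anchor_points(5, 8) == (5, 13)
```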
  • the mobile phone 100 may create CIS(1) for the earplug 101-1 by performing S802 with the earplug 101-1.
  • Similarly, the mobile phone 100 can create CIS(2) for the earplug 101-2 by performing S803 with the earplug 101-2.
  • The steps of creating CIS(1) for the earplug 101-1 and CIS(2) for the earplug 101-2 may be performed between S911 and S912 shown in FIG. 9 (not shown in FIG. 9).
  • the mobile phone 100 may instruct the earplug 101-1 to activate CIS(1), but will not instruct the earplug 101-2 to activate CIS(2) (ie, execute S912 shown in FIG. 9).
  • the mobile phone 100 may send an activation instruction to the earplug 101-1 through ACL(1).
  • the activation instruction is used to trigger the earplug 101-1 to activate the CIS (1).
  • the mobile phone 100 will not send an activation instruction to the earplug 101-2, so that the earplug 101-2 will not activate the CIS (2).
  • After CIS(1) is activated, the mobile phone 100 can transmit audio data with the earplug 101-1 according to the serially scheduled transmission method shown in (a) in FIG. 12 (that is, execute S913 shown in FIG. 9).
  • Specifically, S913 shown in FIG. 9 may include the following process (a) and process (b).
  • The mobile phone 100 starts from the CIS(1).x anchor point (that is, the CIG(x) anchor point), and sends audio data (such as the audio data packet 1) to the earplug 101-1 in the "M->S" of the sub-event (1_1) of the CIS(1) event (x).
  • the earplug 101-1 may receive audio data (such as audio data packet 1) sent by the mobile phone 100 in "M->S” in the sub-event (1_1).
  • the earplug 101-1 sends the first data to the mobile phone 100 in "S->M” in the sub-event (1_1).
  • the mobile phone 100 receives the first data sent by the earplug 101-1 in "S->M” in the sub-event (1_1).
  • the first data may include: feedback information returned by the earplug 101-1 to the mobile phone 100; and/or audio data collected by a microphone (such as the microphone 160) in the earplug 101-1.
  • the feedback information may be the ACK or NACK of the audio data packet 1.
  • the mobile phone 100 sends audio data (such as audio data packet 2) to the earplug 101-1 in "M->S” in the sub-event (1_2).
  • the earplug 101-1 may receive audio data (eg, audio data packet 2) sent by the mobile phone 100 in "M->S” in the sub-event (1_2).
  • the earplug 101-1 sends the third data to the mobile phone 100 by "S->M” in the sub-event (1_2).
  • the mobile phone 100 receives the third data sent by the earplug 101-1 in "S->M” in the sub-event (1_2).
  • the third data may include: feedback information that the earplug 101-1 replies to the mobile phone 100; and/or audio data collected by a microphone (such as the microphone 160) in the earplug 101-1.
  • the feedback information may be the ACK or NACK of the audio data packet 2.
  • In the monaural state (such as the first monaural state, that is, when the earplug 101-1 alone is used as the input/output device of the mobile phone 100), the mobile phone 100 will not transmit audio data with the earplug 101-2 in the sub-event (2_1) and the sub-event (2_2) shown in (a) in FIG. 12. That is, S913 shown in FIG. 9 does not include the following process (c) and process (d).
  • the mobile phone 100 and the earplug 101-1 can use the same transmission method as the CIG event (x) for audio data transmission in the CIG event (x+n).
  • n is greater than or equal to 1, and n is an integer.
  • For the method for transmitting audio data in the CIG event (x+n) between the mobile phone 100 and the earplug 101-1, reference may be made to the method for transmitting audio data in the CIG event (x), which is not repeated here in the embodiments of the present application.
  • During this process, the TWS earphone 101 may undergo a monaural-binaural switch; that is, the TWS earphone 101 may be switched from the monaural state (such as the first monaural state) to the binaural state. That is, the mobile phone 100 can execute S914 shown in FIG. 9.
  • Specifically, the mobile phone 100 may determine that the TWS earphone 101 is switched from the monaural state to the binaural state in the following manner (i) to manner (iii). That is, the mobile phone 100 can perform S914 shown in FIG. 9 in the following manner (i) to manner (iii):
  • If both earplugs are taken out of the earplug box 101-3, the mobile phone 100 determines that the TWS earphone 101 is in the binaural state; if one earplug (such as the earplug 101-1) is taken out of the earplug box 101-3 and the other earplug (such as the earplug 101-2) is not taken out of the earplug box 101-3, the mobile phone 100 determines that the TWS earphone 101 is in the monaural state.
  • The manner (i) corresponds to the above manner (1).
  • In the first monaural state, the mobile phone 100 can determine whether to switch to the binaural state by judging whether the earplug 101-2 is taken out of the earplug box 101-3. For example, the mobile phone 100 may determine that the TWS earphone 101 is switched from the monaural state to the binaural state after the earplug 101-2 is taken out of the earplug box 101-3; that is, the two earplugs of the TWS earphone 101 are used together as the audio input/output device of the mobile phone 100.
  • If both earplugs are worn, the mobile phone 100 determines that the TWS earphone 101 is in the binaural state; if only one earplug (such as the earplug 101-1) is worn, the mobile phone 100 determines that the TWS earphone 101 is in the monaural state.
  • The manner (ii) corresponds to the above manner (2).
  • In the first monaural state, the mobile phone 100 can determine whether to switch to the binaural state by judging whether the earplug 101-2 is switched from the unworn state to the worn state.
  • For example, the mobile phone 100 may determine that the TWS earphone 101 is switched from the monaural state to the binaural state when the earplug 101-2 is worn; that is, the two earplugs of the TWS earphone 101 are used together as the audio input/output device of the mobile phone 100.
  • The manner (iii) corresponds to the above manner (3).
  • In the manner (iii), the mobile phone 100 can determine whether the TWS earphone 101 is switched from the monaural state to the binaural state by determining whether the earplug 101-1 and the earplug 101-2 are paired and connected.
  • The method for the mobile phone 100 to determine whether the TWS earphone 101 is switched from the monaural state to the binaural state includes, but is not limited to, the above manner (i) to manner (iii).
  • For example, the mobile phone 100 can also determine whether the TWS earphone 101 is switched from the monaural state to the binaural state by determining whether the earplug 101-2 has established a connection with the mobile phone 100.
  • If the TWS earphone 101 has not switched to the binaural state, the mobile phone 100 can continue to use the serially scheduled transmission method to transmit audio data with the earplug 101-1 through CIS(1). That is, as shown in FIG. 9, if the TWS earphone 101 does not switch to the binaural state, the mobile phone 100 and the earplug of the TWS earphone 101 may continue to perform S913.
  • If the TWS earphone 101 is switched to the binaural state, the mobile phone 100 may execute S915 shown in FIG. 9 to activate CIS(2). After the mobile phone 100 activates CIS(2), audio data can be transmitted with the earplug 101-2 through CIS(2). In addition, the mobile phone 100 can continue to use the serially scheduled transmission method shown in (a) in FIG. 12 to transmit audio data with the earplug 101-1 through CIS(1) (that is, execute S916 shown in FIG. 9).
  • After the switch to the binaural state, the mobile phone 100 transmits audio data with the two earplugs of the TWS earphone 101 according to the serially scheduled transmission method shown in (a) in FIG. 12. That is, S916 shown in FIG. 9 may specifically include the above process (a) and process (b), and the following process (c) and process (d).
  • The mobile phone 100 starts from the CIS(2).x anchor point, and sends audio data (such as the audio data packet 1) to the earplug 101-2 in the "M->S" of the sub-event (2_1) of the CIS(2) event (x).
  • the earplug 101-2 may receive the audio data (such as audio data packet 1) sent by the mobile phone 100 in "M->S” in the sub-event (2_1).
  • the earplug 101-2 sends the second data to the mobile phone 100 at "S->M” in the sub-event (2_1).
  • the mobile phone 100 receives the second data sent by the earplug 101-2 in "S->M” in the sub-event (2_1).
  • the second data may include: feedback information returned by the earplug 101-2 to the mobile phone 100; and/or audio data collected by a microphone (such as the microphone 160) in the earplug 101-2.
  • the feedback information may be the ACK or NACK of the audio data packet 1.
  • the mobile phone 100 transmits audio data (such as audio data packet 2) to the earplug 101-2 in "M->S” in the sub-event (2_2) of the CIS (2) event (x).
  • the earplug 101-2 can receive audio data (such as audio data packet 2) sent by the mobile phone 100 in "M->S” in the sub-event (2_2).
  • the earplug 101-2 sends the fourth data to the mobile phone 100 in "S->M” in the sub-event (2_2).
  • the mobile phone 100 receives the fourth data sent by the earplug 101-2 in "S->M” in the sub-event (2_2).
  • the fourth data may include: feedback information that the earplug 101-2 replies to the mobile phone 100; and/or audio data collected by a microphone (such as the microphone 160) in the earplug 101-2.
  • the feedback information may be the ACK or NACK of the audio data packet 2.
  • In this way, the mobile phone 100 can transmit audio data with the earplug 101-1 in the sub-event (1_1) and sub-event (1_2) shown in (a) in FIG. 12, and transmit audio data with the earplug 101-2 in the sub-event (2_1) and sub-event (2_2) shown in (a) in FIG. 12.
  • Optionally, if the mobile phone 100 determines that the TWS earphone 101 is in the monaural state, CIS(1) and CIS(2) of the first CIG may also be configured in the interleaved scheduling transmission mode.
  • For the interleaved scheduling transmission mode, reference may be made to the description in other parts of the embodiments of this application, which is not repeated here.
  • The advantage of the serially scheduled transmission method is that the mobile phone 100 can transmit audio data with one earplug (such as the earplug 101-1) in continuous time (such as the temporally continuous sub-event (1_1) and sub-event (1_2)). In this way, the degree of interference on the CIS can be reduced, and the anti-interference performance of audio data transmission can be improved.
  • Moreover, the serially scheduled transmission method can reserve a long continuous time (such as the time corresponding to the sub-event (2_1) and sub-event (2_2)) for other transmissions (such as Wi-Fi). In this way, the mutual interference caused by Wi-Fi and Bluetooth frequently switching to use the transmission resources can be reduced.
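The reuse opportunity just described can be illustrated with a sketch: in the first monaural state only CIS(1) is active, so under serial scheduling the whole CIS(2) portion of the CIG event forms one contiguous idle window (labels only; real sub-events are time intervals):

```python
def serial_schedule(n_subevents=2):
    """Serial mode (FIG. 12(a)): all CIS(1) sub-events first, then all
    CIS(2) sub-events."""
    return [f"(1_{k})" for k in range(1, n_subevents + 1)] + \
           [f"(2_{k})" for k in range(1, n_subevents + 1)]

# With CIS(2) inactive, the idle slots form a contiguous tail of the CIG
# event that another radio (e.g. Wi-Fi) could use without interleaving.
schedule = serial_schedule()
idle = [s for s in schedule if s.startswith("(2_")]
assert schedule == ["(1_1)", "(1_2)", "(2_1)", "(2_2)"]
assert idle == ["(2_1)", "(2_2)"]
```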
  • The mobile phone 100 may receive the user's suspend operation during the process in which the mobile phone 100 and the two earplugs perform S916.
  • For a detailed description of the suspend operation, reference may be made to the related content in the foregoing embodiments, which is not repeated here.
  • In response to the suspend operation, the mobile phone 100 may pause the transmission of audio data with the earplug 101-1 and the earplug 101-2.
  • In some cases, the CIS transmission mode configured by the mobile phone 100 for the earplugs is not suitable for the current scenario (such as the binaural state).
  • The serially scheduled transmission mode that the mobile phone 100 configured for CIS(1) and CIS(2) is more suitable for the monaural state, in which audio data can be transmitted with one earplug in continuous time, improving the anti-interference performance of audio data transmission.
  • In the binaural state, if the serially scheduled transmission mode continues to be used, the mobile phone 100 first transmits audio data with the earplug 101-1 in the CIS(1) event (x) (that is, sub-event (1_1) and sub-event (1_2)), and then transmits audio data with the earplug 101-2 in the CIS(2) event (x) (that is, sub-event (2_1) and sub-event (2_2)).
  • In this case, the degree of interference on CIS(1) and that on CIS(2) may differ considerably.
  • the mobile phone 100 may re-execute S900 to determine whether the TWS earphone 101 is currently in a mono-ear or binaural state, and then execute S901 or S911 to configure the first CIG for the TWS earphone 101 according to the judgment result.
  • the mobile phone 100 can execute S911 to configure CIS(1) and CIS(2) as the interleaved scheduling transmission mode.
  • It should be noted that after the mobile phone 100 has configured the CIS (that is, configured the audio data transmission mode), no matter how the state of the TWS earphone 101 switches (such as from the monaural state to the binaural state, or from the binaural state to the monaural state), the mobile phone 100 transmits audio data with the TWS earphone 101 according to the configured transmission mode (such as the serially scheduled or interleaved scheduling transmission mode) until the audio service ends. That is to say, the transmission mode of the audio data does not change, and the audio data is not interrupted by the monaural-binaural switching of the TWS earphone 101.
  • However, the mobile phone 100 can reconfigure the CIS while the audio data is suspended. In this way, after the service restarts, the mobile phone 100 can transmit audio data through the reconfigured CIS, and the service is not interrupted by the reconfiguration of the CIS.
  • the reconfigured transmission method is more suitable for the current state of the TWS earphone 101 (such as the mono-ear state or the binaural state), which can improve the transmission efficiency of audio data.
  • the TWS earphone 101 can also be switched back to the mono-ear state.
  • the method of the embodiment of the present application may further include S904.
  • the mobile phone 100 may deactivate CIS (2), that is, execute S917.
  • the mobile phone 100 may adopt a serially scheduled transmission method to transmit audio data through the CIS (1) and the earbud 101-1 (that is, execute S913).
  • the method in the embodiments of the present application may further include S914-S916 and S910.
  • In one case (case (1)), the audio data sent by the mobile phone 100 to the earplug 101-1 and the earplug 101-2 may be different. Take the earplug 101-1 as the left earplug and the earplug 101-2 as the right earplug as an example.
  • the mobile phone 100 transmits the audio data of the left channel to the earplug 101-1 and the audio data of the right channel to the earplug 101-2.
  • the earplug 101-1 plays audio data of the left channel
  • the earplug 101-2 plays audio data of the right channel. That is, the earplug 101-1 and the earplug 101-2 are combined to play stereo audio data.
  • In the case (1), the mobile phone 100 may separately encode the audio data to be sent to the left and right earplugs (that is, perform left-channel encoding and right-channel encoding).
  • In another case (case (2)), the audio data sent by the mobile phone 100 to the earplug 101-1 and the earplug 101-2 may be the same.
  • For example, the audio data transmitted from the mobile phone 100 to the earplug 101-1 and the earplug 101-2 may both be mono audio data.
  • the earplug 101-1 and the earplug 101-2 can play mono audio data.
  • the mobile phone 100 can mono-encode the audio data to be sent to the left and right earplugs.
  • In the monaural state, in order to improve the user's listening experience, the mobile phone 100 cannot use left-and-right-channel encoding, and cannot send audio data that has undergone left-channel encoding or right-channel encoding to the earplug being used.
  • the mobile phone 100 can perform mono coding of audio data, and the earplug can play mono audio data.
  • Therefore, in the above case (1), if the TWS earphone 101 is switched from the binaural state to the monaural state, the mobile phone 100 needs to switch the encoding mode from left-and-right-channel encoding to mono encoding. Similarly, if the TWS earphone 101 is switched from the monaural state to the binaural state in the above case (1), the mobile phone 100 needs to switch the encoding mode from mono encoding to left-and-right-channel encoding.
  • In the above case (2), if the TWS earphone 101 undergoes a monaural-binaural switch, for example, from the binaural state to the monaural state, or from the monaural state to the binaural state, the mobile phone 100 does not need to change the encoding mode.
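The encoding choice described in the two cases above can be sketched as a small selection function (for illustration only; the state and case labels are shorthand for the states and cases in the text):

```python
def select_encoding(state, case):
    """Sketch of the encoding choice described above.

    case 1: different audio per earplug -> left/right-channel encoding in the
            binaural state, mono encoding in the monaural state (so a switch
            of encoding mode is needed when the state changes).
    case 2: the same (mono) audio to both earplugs -> mono encoding always.
    """
    if case == 2:
        return "mono"
    return "left/right" if state == "binaural" else "mono"

assert select_encoding("binaural", 1) == "left/right"
assert select_encoding("monaural", 1) == "mono"   # switch needed in case 1
assert select_encoding("binaural", 2) == "mono"   # no switch needed in case 2
assert select_encoding("monaural", 2) == "mono"
```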
  • the mobile phone 100 can transmit the same audio data to the left and right earplugs of the TWS earphone 101 at different times, respectively.
  • For example, the mobile phone 100 transmits the audio data packet 1 to the earplug 101-1 in the "M->S" of the sub-event (1_1) shown in (a) in FIG. 10 or (a) in FIG. 12,
  • and transmits the audio data packet 1 to the earplug 101-2 in the "M->S" of the sub-event (2_1) shown in (a) in FIG. 10 or (a) in FIG. 12.
  • the mobile phone 100 repeatedly transmits the same audio data in different time periods, which will result in a waste of transmission resources and reduce the effective utilization rate of the transmission resources.
  • the above-mentioned CIS(1) and CIS(2) may be configured as a jointly scheduled transmission mode.
  • the TWS earphone 101 is in a binaural state, that is, the two earplugs (earplug 101-1 and earplug 101-2) of the TWS earphone 101 are used together as an audio input/output device of the mobile phone 100.
  • Taking as an example a method in which CIS(1) and CIS(2) are configured in the jointly scheduled transmission mode, the method of the embodiments of the present application is described below.
  • the mobile phone 100 may configure the first CIG, and configure CIS(1) and CIS(2) as the jointly scheduled transmission method shown in (a) in FIG. 14.
  • Specifically, as shown in (a) in FIG. 14, the anchor point of CIS(1) (such as the CIS(1).x anchor point) is the same as the anchor point of CIS(2) (such as the CIS(2).x anchor point), and the sub-interval of CIS(1) is the same as the sub-interval of CIS(2) (such as CIS(2)_sub-interval).
  • the mobile phone 100 may create CIS(1) for the earplug 101-1 and CIS(2) for the earplug 101-2. Finally, the mobile phone 100 may instruct the earplug 101-1 to activate CIS (1) and the earplug 101-2 to activate CIS (2). After CIS(1) and CIS(2) are activated, the mobile phone 100 can transmit audio data with the earplug 101-1 and the earplug 101-2 according to the jointly scheduled transmission method shown in (a) in FIG. 14 .
  • Specifically, the mobile phone 100 may transmit audio data with the earplug 101-1 and the earplug 101-2 according to the jointly scheduled transmission method shown in (a) in FIG. 14, which may include the following process (1) to process (6).
  • The mobile phone 100 starts from the CIS(1).x anchor point (that is, the CIS(2).x anchor point), and sends audio data (such as the audio data packet 1) in a frequency-hopping manner in the "M->S" of the sub-event (1_1) of the CIS(1) event (x) and of the sub-event (2_1) of the CIS(2) event (x) (that is, the bold "M->S" in (a) in FIG. 14).
  • Correspondingly, the earplug 101-1 can receive, in a frequency-hopping manner, the audio data packet 1 sent by the mobile phone 100 in the "M->S" (that is, the bold "M->S") of the sub-event (1_1) shown in (a) in FIG. 14.
  • Similarly, the earplug 101-2 can receive, in a frequency-hopping manner, the audio data packet 1 sent by the mobile phone 100 in the "M->S" (that is, the bold "M->S") of the sub-event (2_1) shown in (a) in FIG. 14.
  • The earplug 101-1 may send the first data to the mobile phone 100 in the "S->M" (the solid-line "S->M") of the sub-event (1_1).
  • the mobile phone 100 may receive the first data sent by the earplug 101-1 in "S->M” in the sub-event (1_1).
  • the earplug 101-2 may send the third data to the mobile phone 100 in "S->M" (dashed “S->M") in the sub-event (2_1).
  • the mobile phone 100 may receive the third data sent by the earplug 101-2 in "S->M” in the sub-event (2_1).
  • The mobile phone 100 then transmits audio data (such as the audio data packet 2) in a frequency-hopping manner in the "M->S" (that is, the bold "M->S") of the sub-event (1_2) and the sub-event (2_2).
  • the earplug 101-1 can receive the audio data packet 2 sent by the mobile phone 100 in a frequency hopping manner in "M->S” in the sub-event (1_2) shown in (a) in FIG. 14.
  • the earplug 101-2 can receive the audio data packet 2 sent by the mobile phone 100 in a frequency hopping manner in "M->S” in the sub-event (2_2) shown in (a) in FIG. 14.
  • The earplug 101-1 may send the second data to the mobile phone 100 in the "S->M" (the solid-line "S->M") of the sub-event (1_2).
  • the mobile phone 100 may receive the second data sent by the earplug 101-1 in "S->M” in the sub-event (1_2).
  • the earplug 101-2 may send the fourth data to the mobile phone 100 in "S->M" (dashed line “S->M") in the sub-event (2_2).
  • the mobile phone 100 may receive the fourth data sent by the earplug 101-2 in "S->M” in the sub-event (2_2).
  • the mobile phone 100 may deactivate CIS (1). After the mobile phone 100 deactivates CIS(1), it can stop transmitting audio data through the CIS(1) and the earplug 101-1. In addition, the mobile phone 100 can continue to use the joint scheduling transmission method shown in (a) in FIG. 14 to transmit audio data through the CIS (2) and the earplug 101-2.
  • In this case, the method for the mobile phone 100 to transmit audio data with the earplug 101-2 according to the jointly scheduled transmission method shown in (a) in FIG. 14 may include the above process (1), process (3), process (4), and process (6), excluding process (2) and process (5). Moreover, in process (1) and process (4), the earplug 101-1 will not receive, in a frequency-hopping manner, the audio data packets sent by the mobile phone 100 in the "M->S" (that is, the bold "M->S") shown in (a) in FIG. 14.
  • It can be seen that, with the jointly scheduled transmission mode, the mobile phone 100 can send audio data packets in a frequency-hopping manner starting from the same time point (that is, the CIS(1).x anchor point and the CIS(2).x anchor point, which are the same).
  • Correspondingly, the left and right earplugs of the TWS earphone 101 can both receive the audio data packets in the same "M->S" in a frequency-hopping manner. In this way, the mobile phone 100 does not repeatedly transmit the same audio data in different time periods, which reduces the waste of transmission resources and improves their effective utilization.
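The resource saving can be illustrated with a rough count of "M->S" transmissions needed to deliver the same audio packets to both earplugs (a sketch that ignores retransmissions and acknowledgement traffic):

```python
def m_to_s_transmissions(mode, n_packets):
    """Count how many 'M->S' transmissions the phone needs to deliver the
    same n_packets to BOTH earplugs (sketch; retransmissions ignored)."""
    if mode in ("serial", "interleaved"):
        return 2 * n_packets   # each packet is sent once per CIS
    if mode == "joint":
        return n_packets       # one hop-synchronized send heard by both
    raise ValueError(f"unknown mode: {mode}")

assert m_to_s_transmissions("interleaved", 4) == 8
assert m_to_s_transmissions("joint", 4) == 4  # half the airtime for identical audio
```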
  • the electronic device may include one or more processors; a memory; and one or more computer programs.
  • the foregoing devices may be connected through one or more communication buses.
  • the one or more computer programs are stored in the above-mentioned memory and are configured to be executed by the one or more processors.
  • the one or more computer programs include instructions, and the instructions may be used to perform the relevant steps of the foregoing methods.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division into modules or units is merely a division of logical functions.
  • the mutual coupling, direct coupling, or communication connection displayed or discussed may be indirect coupling or a communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of this embodiment essentially, or the part that contributes to the existing technology, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium
  • and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the methods described in the various embodiments.
  • the foregoing storage media include: flash memory, a removable hard disk, read-only memory, random access memory, a magnetic disk, an optical disc, or other media capable of storing program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Headphones And Earphones (AREA)
  • Telephone Function (AREA)
  • Stereophonic System (AREA)

Abstract

Embodiments of this application provide an audio data transmission method and device applied to single/dual-earbud switching of a TWS headset, relating to the field of short-range communication technology, capable of ensuring normal audio data transmission during single/dual-earbud switching. A specific solution includes: an electronic device transmits audio data with a first earbud of the TWS headset through a first CIS of a first CIG, and transmits audio data with a second earbud of the TWS headset through a second CIS of the first CIG; the electronic device determines that the TWS headset switches from a dual-earbud state (that is, a state in which the first earbud and the second earbud are used together as audio input/output devices of the electronic device) to a first single-earbud state (that is, a state in which the first earbud alone is used as the audio input/output device of the electronic device); in response to this determination, the electronic device deactivates the second CIS, stops transmitting audio data with the second earbud through the second CIS, and continues to transmit audio data with the first earbud through the first CIS.

Description

Audio Data Transmission Method and Device Applied to Single/Dual-Earbud Switching of a TWS Headset

Technical Field

Embodiments of this application relate to the field of short-range communication technology, and in particular, to an audio data transmission method and an electronic device.

Background

A source device can transmit audio data (an audio stream) to one or more destination devices through an isochronous (ISO) channel of Bluetooth Low Energy (BLE). For example, a mobile phone can transmit audio data to the left and right earbuds of a true wireless stereo (TWS) headset through a BLE ISO channel. A TWS headset comprises two earphone bodies, referred to respectively as the left earbud and the right earbud, with no wire connection between them.

The left and right earbuds of a TWS headset can be used together (referred to as the dual-earbud state) as the audio input/output devices of one mobile phone to implement functions such as music playback or voice communication. In the dual-earbud state, playback-level synchronization of the audio data is required; that is, the left and right earbuds must play the received audio data simultaneously.

Of course, either the left earbud or the right earbud of the TWS headset can also be used alone (referred to as the single-earbud state) as the audio input/output device of a mobile phone to implement functions such as music playback or voice communication. In the single-earbud state, playback-level synchronization of audio data is not required.

Since the requirements for synchronized playback of audio data by the left and right earbuds differ between the single-earbud state and the dual-earbud state, the mobile phone configures the ISO channel differently in the two states. Consequently, when the TWS headset performs single/dual-earbud switching (for example, switching from the single-earbud state to the dual-earbud state, or from the dual-earbud state to the single-earbud state), the mobile phone needs to reconfigure the ISO channel.

However, if single/dual-earbud switching occurs during music playback or voice communication, reconfiguring the ISO channel causes an interruption of the audio data and degrades the user experience.

Summary

Embodiments of this application provide an audio data transmission method and an electronic device that can ensure normal audio data transmission during single/dual-earbud switching.
In a first aspect, embodiments of this application provide an audio data transmission method. When the electronic device configures a CIS for the earbuds of a TWS headset, it can configure for the TWS headset, regardless of whether the headset is in the single-earbud state or the dual-earbud state, a first connected isochronous group (CIG) that includes two connected isochronous streams (CIS), such as a first CIS and a second CIS. In this way, in the dual-earbud state, the electronic device can activate both CIS and then transmit audio data with the left and right earbuds of the TWS headset through them. In the single-earbud state, the electronic device can activate only one CIS and transmit audio data with the corresponding earbud through it. If single/dual-earbud switching occurs during music playback or voice communication, the electronic device does not need to reconfigure the CIS; it only needs to activate or deactivate the corresponding CIS. Thus, no interruption of audio data transmission is triggered, normal audio data transmission is ensured, and the user experience is improved.

With reference to the first aspect, in a possible design, in the dual-earbud state, the electronic device can activate the first CIS and the second CIS, transmit audio data with the first earbud through the first CIS, and transmit audio data with the second earbud through the second CIS. In this way, when the TWS headset switches from the dual-earbud state to the single-earbud state, the electronic device can deactivate one CIS and continue to use the other CIS to transmit audio data with the corresponding earbud.

With reference to the first aspect, in a possible design, in the single-earbud state, the electronic device activates only one CIS (such as the first CIS) and transmits audio data with the corresponding earbud (such as the first earbud) through the activated CIS; the other CIS (such as the second CIS) is not activated. In this way, when the TWS headset 101 switches from the single-earbud state to the dual-earbud state, the electronic device 100 can activate the other CIS and use both CIS to transmit audio data with both earbuds.
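The core idea of configuring both CIS up front and only toggling activation can be sketched as a minimal state machine. This is an illustrative sketch under assumed names (CigController, apply_state, and the state strings are made up for the example, not part of any real Bluetooth API):

```python
# Minimal sketch of the claimed mechanism: both CIS are configured once;
# ear-state switches only toggle activation, so no CIS is re-created.
class CigController:
    def __init__(self):
        # Configured up front for both earbuds, regardless of current state.
        self.configured = {"CIS1", "CIS2"}
        self.active = set()

    def apply_state(self, state: str) -> None:
        wanted = {
            "dual": {"CIS1", "CIS2"},
            "single_first": {"CIS1"},
            "single_second": {"CIS2"},
        }[state]
        self.active = wanted & self.configured  # activate/deactivate only

ctl = CigController()
ctl.apply_state("dual")            # both CIS active
ctl.apply_state("single_first")    # dual -> single: just deactivate CIS2
print(ctl.active, ctl.configured)  # configuration is never torn down
```

Because `configured` never changes across switches, there is no reconfiguration window during which the audio stream would be interrupted.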
With reference to the first aspect, in a possible design, when the electronic device configures the first CIG, the first CIS and the second CIS are configured in a sequential-scheduling transmission mode or an interleaved-scheduling transmission mode. The CIG parameters configured for the first CIG differ between the two modes.

With reference to the first aspect, in a possible design, in the dual-earbud state, the first CIS and the second CIS are configured in the interleaved-scheduling transmission mode. In this mode, the anchor point of the first CIS is the CIG anchor point of the first CIG; the anchor point of the second CIS coincides with the end point of the first sub-event in a CIS event of the first CIS; and the start point of the second sub-event of the first CIS is the end point of the first sub-event of the second CIS.

The first CIS and the second CIS each include multiple CIS events; the first CIG includes multiple CIG events; each CIG event includes one CIS event of the first CIS and one CIS event of the second CIS; each CIS event of the first CIS includes N1 sub-events, where N1 is greater than or equal to 2; and each CIS event of the second CIS includes N2 sub-events, where N2 is greater than or equal to 2.

The electronic device transmits audio data with the first earbud through the first CIS starting from the anchor point of the first CIS, and transmits audio data with the second earbud through the second CIS starting from the anchor point of the second CIS.

Of course, in the dual-earbud state, the first CIS and the second CIS may also be configured in the sequential-scheduling transmission mode. For a detailed description of sequential scheduling, refer to other parts of the embodiments of this application; details are not repeated here.

In the dual-earbud state, compared with sequential scheduling, interleaved scheduling has the advantage that the electronic device can interleave the sub-events of the first CIS with the sub-events of the second CIS in time; that is, the audio data of the first CIS and of the second CIS are interleaved in time for transmission. This makes the interference experienced by the different CIS more even and improves the anti-interference performance of audio data transmission.

With reference to the first aspect, in a possible design, in the single-earbud state, the first CIS and the second CIS are configured in the sequential-scheduling transmission mode. In this mode, the anchor point of the first CIS is the CIG anchor point of the first CIG, and the anchor point of the second CIS coincides with the end point of the CIS event of the first CIS.

Of course, in the single-earbud state, the first CIS and the second CIS may also be configured in the interleaved-scheduling transmission mode. For a detailed description of interleaved scheduling, refer to other parts of the embodiments of this application; details are not repeated here.

In the single-earbud state, compared with interleaved scheduling, sequential scheduling has the advantage that the electronic device can transmit audio data with one earbud (such as the first earbud) in continuous time (for example, all sub-events in one CIS event of the first CIS are contiguous in time). This reduces the degree of interference experienced by the CIS and improves the anti-interference performance of audio data transmission.

Moreover, in the single-earbud state, sequential scheduling leaves longer continuous idle periods for other transmissions (such as Wireless Fidelity, Wi-Fi). This reduces the mutual interference caused by Wi-Fi and Bluetooth frequently switching over the shared transmission resources.

Optionally, when the electronic device configures the first CIG, the first CIS and the second CIS may be configured in a joint-scheduling transmission mode. In the dual-earbud state, joint scheduling avoids the problem in the sequential- or interleaved-scheduling transmission modes whereby the electronic device transmits the same audio data separately to the left and right earbuds of the TWS headset at different times; it thus reduces the waste of transmission resources and improves their effective utilization.

In the joint-scheduling transmission mode, the anchor point of the first CIS and the anchor point of the second CIS are both the CIG anchor point of the first CIG. The first CIG includes multiple CIG events; the CIG anchor point of the first CIG is the start time point of a CIG event.
In a second aspect, embodiments of this application provide an audio data transmission method, which can be used for audio data transmission between an electronic device and a TWS headset that includes a first earbud and a second earbud. The electronic device can transmit audio data with the first earbud through a first CIS of a first CIG, and with the second earbud through a second CIS of the first CIG. At this time, the TWS headset is in the dual-earbud state, that is, a state in which the first earbud and the second earbud are used together as audio input/output devices of the electronic device. If the TWS headset switches from the dual-earbud state to a first single-earbud state, the electronic device can deactivate the second CIS, stop transmitting audio data with the second earbud through the second CIS, and continue to transmit audio data with the first earbud through the first CIS. The first single-earbud state is a state in which the first earbud alone is used as the audio input/output device of the electronic device.

Of course, when the TWS headset is in the dual-earbud state, it may instead switch to a second single-earbud state, in which the second earbud alone is used as the audio input/output device of the electronic device. In that case, the electronic device can deactivate the first CIS, stop transmitting audio data with the first earbud through the first CIS, and continue to transmit audio data with the second earbud through the second CIS.

In the embodiments of this application, when the TWS headset switches from the dual-earbud state to a single-earbud state (such as the first single-earbud state), the electronic device can deactivate the CIS corresponding to the earbud that is no longer used (such as the second CIS) instead of reconfiguring the CIS. Thus, no interruption of audio data transmission is triggered, normal audio data transmission is ensured, and the user experience is improved.

With reference to the second aspect, in a possible design, after the electronic device deactivates the second CIS, stops transmitting audio data with the second earbud through the second CIS, and continues to transmit audio data with the first earbud through the first CIS — that is, after the TWS headset has switched from the dual-earbud state to the first single-earbud state — the TWS headset may switch back from the first single-earbud state to the dual-earbud state. Specifically, the method of the embodiments of this application may further include: the electronic device determines that the TWS headset switches from the first single-earbud state to the dual-earbud state; in response to this determination, the electronic device continues to transmit audio data with the first earbud through the first CIS, activates the second CIS, and transmits audio data with the second earbud through the second CIS.

In other words, because the electronic device has configured two CIS for the TWS headset, when the TWS headset switches from a single-earbud state (such as the first single-earbud state) to the dual-earbud state, the electronic device only needs to activate the CIS corresponding to the previously unused earbud (such as the second CIS). Thus, no interruption of audio data transmission is triggered, normal audio data transmission is ensured, and the user experience is improved.

With reference to the second aspect, in another possible design, before transmitting audio data with the first earbud through the first CIS of the first CIG, the electronic device may determine that the TWS headset is in the dual-earbud state; configure for the TWS headset the first CIG including the first CIS and the second CIS; configure the first CIS for the first earbud and the second CIS for the second earbud; and activate the first CIS and the second CIS. Even if the TWS headset later switches from the dual-earbud state to a single-earbud state, the electronic device only needs to deactivate the corresponding CIS. Thus, no interruption of audio data transmission is triggered, normal audio data transmission is ensured, and the user experience is improved.

With reference to the second aspect, in another possible design, when the TWS headset is in the dual-earbud state and the electronic device configures the first CIG, the first CIS and the second CIS are configured in the interleaved-scheduling transmission mode. In this mode, the anchor point of the first CIS is the CIG anchor point of the first CIG; the anchor point of the second CIS coincides with the end point of the first sub-event in a CIS event of the first CIS; and the start point of the second sub-event of the first CIS is the end point of the first sub-event of the second CIS. For the advantages of interleaved scheduling over sequential scheduling in the dual-earbud state, refer to the description in the possible designs of the first aspect; details are not repeated here.

With reference to the second aspect, in another possible design, when the TWS headset is in the dual-earbud state and the electronic device configures the first CIG, the first CIS and the second CIS are configured in the joint-scheduling transmission mode, in which the anchor points of the first CIS and the second CIS are both the CIG anchor point of the first CIG. For the advantages of joint scheduling in the dual-earbud state, refer to the description in the possible designs of the first aspect; details are not repeated here.

With reference to the second aspect, in another possible design, after the electronic device determines that the TWS headset has switched from the dual-earbud state to the first single-earbud state, the electronic device may receive a suspend operation from the user. The suspend operation is used to trigger the TWS headset to pause the playback of audio data. To avoid the problem that, after the TWS headset switches from the dual-earbud state to a single-earbud state, the transmission mode of the CIS configured by the electronic device for the earbuds is not suitable for the single-earbud state, the electronic device can, in response to the suspend operation, re-determine the current state of the TWS headset (such as the first single-earbud state) and then reconfigure the first CIG for the TWS headset; the reconfigured first CIG includes a reconfigured first CIS and a reconfigured second CIS.

The reconfigured first CIS and the reconfigured second CIS are suitable for the state of the TWS headset after switching, such as the first single-earbud state. For example, the reconfigured first CIS and second CIS may be configured in the sequential-scheduling transmission mode. For a detailed description of sequential scheduling, refer to other parts of the embodiments of this application; details are not repeated here.

Furthermore, the electronic device can configure the reconfigured first CIS for the first earbud, activate it, and transmit audio data with the first earbud through the reconfigured first CIS starting from its anchor point. The reconfigured second CIS is not activated in the first single-earbud state.

It can be understood that, in response to the suspend operation, the audio data is suspended (that is, stopped). The electronic device reconfigures the CIS while the audio data is suspended, so that after the service resumes, the electronic device can transmit audio data through the reconfigured CIS. In this way, the service is not interrupted by the CIS reconfiguration.
In a third aspect, embodiments of this application provide an audio data transmission method, which can be used for audio data transmission between an electronic device and a TWS headset that includes a first earbud and a second earbud. When the electronic device determines that the TWS headset is in the first single-earbud state, it can configure for the first earbud a first CIG that includes a first CIS and a second CIS. The first single-earbud state is a state in which the first earbud alone is used as the audio input/output device of the electronic device. The electronic device can configure the first CIS for the first earbud, activate the first CIS, and transmit audio data with the first earbud through the first CIS; the second CIS remains inactive in the first single-earbud state, that is, the second CIS is not activated in the first single-earbud state. In other words, when the electronic device determines that the TWS headset is in a single-earbud state, it still configures two CIS (the first CIS and the second CIS), but in the single-earbud state only one of them is activated and the other is not.

In the embodiments of this application, even if the TWS headset is in a single-earbud state, the electronic device can configure for the TWS headset a first CIG that includes two CIS (such as the first CIS and the second CIS). In this way, if the TWS headset switches from the single-earbud state to the dual-earbud state during music playback or voice communication, the electronic device does not need to reconfigure the CIS; it only needs to activate the corresponding CIS (such as the second CIS). Thus, no interruption of audio data transmission is triggered, normal audio data transmission is ensured, and the user experience is improved.

With reference to the third aspect, in a possible design, the TWS headset may switch back from the first single-earbud state to the dual-earbud state. Specifically, the method of the embodiments of this application may further include: the electronic device determines that the TWS headset switches from the first single-earbud state to the dual-earbud state; in response to this determination, the electronic device activates the second CIS, transmits audio data with the second earbud through the second CIS, and continues to transmit audio data with the first earbud through the first CIS.

In other words, because the electronic device has configured two CIS for the TWS headset, when the TWS headset switches from a single-earbud state (such as the first single-earbud state) to the dual-earbud state, the electronic device only needs to activate the CIS corresponding to the previously unused earbud (such as the second CIS). Thus, no interruption of audio data transmission is triggered, normal audio data transmission is ensured, and the user experience is improved.

With reference to the third aspect, in another possible design, after the TWS headset switches from a single-earbud state (such as the first single-earbud state) to the dual-earbud state, it may switch again from the dual-earbud state to the second single-earbud state. After determining that the TWS headset has switched from the dual-earbud state to the second single-earbud state, the electronic device can deactivate the first CIS, stop transmitting audio data with the first earbud through the first CIS, and continue to transmit audio data with the second earbud through the second CIS.

With reference to the third aspect, in another possible design, when the TWS headset is in a single-earbud state and the electronic device configures the first CIG, the first CIS and the second CIS are configured in the sequential-scheduling transmission mode. In this mode, the anchor point of the first CIS is the CIG anchor point of the first CIG, and the anchor point of the second CIS coincides with the end point of the CIS event of the first CIS. For a detailed description of sequential scheduling, refer to other parts of the embodiments of this application; details are not repeated here.

With reference to the third aspect, in another possible design, after the electronic device determines that the TWS headset has switched from a single-earbud state (such as the first single-earbud state) to the dual-earbud state, the electronic device may receive a suspend operation from the user. The suspend operation is used to trigger the TWS headset to pause the playback of audio data. To avoid the problem that, after the switch to the dual-earbud state, the transmission mode of the CIS configured by the electronic device for the earbuds is not suitable for the dual-earbud state, the electronic device can, in response to the suspend operation, re-determine the current state of the TWS headset (such as the dual-earbud state) and then reconfigure the first CIG for the TWS headset; the reconfigured first CIG includes a reconfigured first CIS and a reconfigured second CIS.

The reconfigured first CIS and the reconfigured second CIS are suitable for the state of the TWS headset after switching, such as the dual-earbud state. For example, they may be configured in the interleaved-scheduling or the joint-scheduling transmission mode. For detailed descriptions of these two modes, refer to other parts of the embodiments of this application; details are not repeated here.

In a fourth aspect, embodiments of this application provide an electronic device, including: one or more processors, a memory, and a wireless communication module. The memory and the wireless communication module are coupled to the one or more processors; the memory is used to store computer program code, and the computer program code includes computer instructions. When the one or more processors execute the computer instructions, the electronic device performs the audio data transmission method of any one of the first to third aspects and their possible implementations.

In a fifth aspect, a Bluetooth communication system is provided, which may include a TWS headset and the electronic device of the fourth aspect.

In a sixth aspect, a computer storage medium is provided, including computer instructions that, when run on an electronic device, cause the electronic device to perform the audio data transmission method of any one of the first to third aspects and their possible implementations.

In a seventh aspect, this application provides a computer program product that, when run on a computer, causes the computer to perform the audio data transmission method of any one of the first to third aspects and their possible implementations.

It can be understood that the electronic device of the fourth aspect, the Bluetooth communication system of the fifth aspect, the computer storage medium of the sixth aspect, and the computer program product of the seventh aspect are all used to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods, which are not repeated here.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the composition of a communication system for audio data transmission according to an embodiment of this application;

FIG. 2 is a schematic diagram of an audio data transmission principle according to an embodiment of this application;

FIG. 3 is a schematic diagram of the composition of another communication system for audio data transmission according to an embodiment of this application;

FIG. 4 is a schematic diagram of an example product form of a TWS headset according to an embodiment of this application;

FIG. 5 is a schematic diagram of the hardware structure of an earbud of a TWS headset according to an embodiment of this application;

FIG. 6A is a schematic diagram of the hardware structure of an electronic device according to an embodiment of this application;

FIG. 6B is a schematic diagram of the principle of transmitting audio data over an ISO channel according to an embodiment of this application;

FIG. 7 is a BLE-based audio protocol framework according to an embodiment of this application;

FIG. 8 is a schematic flowchart of configuring a CIG and creating a CIS according to an embodiment of this application;

FIG. 9 is a flowchart of an audio data transmission method according to an embodiment of this application;

FIG. 10 is a schematic diagram of the principle of the interleaved-scheduling transmission mode according to an embodiment of this application;

FIG. 11 is a flowchart of another audio data transmission method according to an embodiment of this application;

FIG. 12 is a schematic diagram of the principle of the sequential-scheduling transmission mode according to an embodiment of this application;

FIG. 13 is a flowchart of another audio data transmission method according to an embodiment of this application;

FIG. 14 is a schematic diagram of the principle of the joint-scheduling transmission mode according to an embodiment of this application.
Detailed Description

Embodiments of this application provide an audio data transmission method that can be applied to audio data (audio stream) transmission between an electronic device (such as a mobile phone) and a TWS headset.

Before the electronic device transmits audio data with the earbud(s) (one or both) of the TWS headset, the electronic device can first pair with the earbud, then establish an asynchronous connection-oriented (ACL) link, and finally configure an ISO channel for the earbud over the ACL link. The electronic device can configure a CIG for the TWS headset according to the state in which the headset is used (such as the single-earbud state or the dual-earbud state). Configuring the ISO channel over the ACL link specifically means that the electronic device establishes, over the ACL link, the CIS in that CIG. The CIS is used for transmitting audio data between the electronic device and the earbud and is carried on the ISO channel.

On one hand, either the left earbud or the right earbud of the TWS headset, used alone (referred to as the single-earbud state) as the audio input/output device of an electronic device, can implement functions such as music playback or voice communication. For example, as shown in FIG. 4, the TWS headset 101 includes earbud 101-1 and earbud 101-2. As shown in FIG. 1, earbud 101-1 of the TWS headset 101, used alone as the audio input/output device of the electronic device 100 (such as a mobile phone), can implement functions such as music playback or voice communication.

Generally, in the single-earbud state, the electronic device 100 configures for the TWS headset 101 a CIG that includes only one CIS. The electronic device 100 shown in FIG. 1 can establish an ACL link with earbud 101-1 and establish a CIS over that ACL link. The CIS is used for transmitting audio data between the electronic device 100 and earbud 101-1 and is carried on the ISO channel between them. Note that this CIG includes only one CIS. For example, as shown in FIG. 2, CIG event (x) includes only one CIS event (x), and CIG event (x+1) includes only one CIS event (x+1).

A CIG includes multiple CIG events. CIG event (x) and CIG event (x+1) are both CIG events (CIG_event) of the CIG. The electronic device 100 and earbud 101-1 can transmit audio data in multiple CIG events of one CIG, for example in CIG event (x), CIG event (x+1), and so on.

On the other hand, the left and right earbuds of the TWS headset, used together (referred to as the dual-earbud state) as the audio input/output devices of an electronic device, can implement functions such as music playback or voice communication. For example, as shown in FIG. 3, earbud 101-1 and earbud 101-2 of the TWS headset 101, used together as the audio input/output devices of the electronic device 100 (such as a mobile phone), can implement functions such as music playback or voice communication.

Generally, in the dual-earbud state, the electronic device 100 configures for the TWS headset 101 a CIG that includes two CIS (such as CIS(1) and CIS(2)). The electronic device can establish ACL links with the left and right earbuds of the TWS headset respectively.

For example, the electronic device can establish ACL link 1 with earbud 101-1 and ACL link 2 with earbud 101-2. The electronic device 100 can establish CIS(1) over ACL link 1; CIS(1) is used for transmitting audio data with earbud 101-1 and is carried on ISO channel 1 between the electronic device 100 and earbud 101-1. The electronic device 100 can establish CIS(2) over ACL link 2; CIS(2) is used for transmitting audio data with earbud 101-2 and is carried on ISO channel 2 between the electronic device 100 and earbud 101-2.

Note that this CIG includes two CIS (such as CIS(1) and CIS(2)). For example, as shown in (a) of FIG. 10 or (a) of FIG. 12, CIG event (x) includes CIS(1) event (x) and CIS(2) event (x). CIG event (x) is one CIG event of the CIG; a CIG includes multiple CIG events. The electronic device 100 and earbuds 101-1 and 101-2 can transmit audio data in multiple CIG events of one CIG.

Multiple CIS in the same CIG can share the same CIG presentation point (CIG_presentation point). The CIG presentation point is a time point after the electronic device 100 sends the audio data. Earbud 101-1 (corresponding to CIS(1)) and earbud 101-2 (corresponding to CIS(2)) can play the received audio data simultaneously at the CIG presentation point, thereby achieving playback-level synchronization of the audio stream between the two earbuds (that is, the two earbuds play the audio data at the same time).

In summary, in the prior art, in the single-earbud state the electronic device 100 configures for the TWS headset 101 a CIG including only one CIS, while in the dual-earbud state it configures a CIG including two CIS. Therefore, if single/dual-earbud switching occurs during music playback or voice communication (such as switching from the single-earbud state to the dual-earbud state, or from the dual-earbud state to the single-earbud state), the electronic device 100 needs to reconfigure the CIS. For example, when switching from the single-earbud state to the dual-earbud state, the electronic device 100 needs to reconfigure two CIS of one CIG for the two earbuds. Reconfiguring the CIS takes a certain amount of time, which causes interruption of the audio data and degrades the user experience.

To solve the problem of audio data interruption during such scenario switching, in the embodiments of this application, when the electronic device 100 configures a CIS for the earbuds of the TWS headset 101, it configures for the TWS headset 101 a CIG including two CIS, regardless of whether the headset is in the single-earbud or dual-earbud state. In this way, in the dual-earbud state, the electronic device 100 can activate both CIS and transmit audio data with the left and right earbuds through them. In the single-earbud state, the electronic device 100 can activate only one CIS and transmit audio data with the corresponding earbud through it. If single/dual-earbud switching occurs during music playback or voice communication, the electronic device 100 does not need to reconfigure the CIS; it only needs to activate or deactivate the corresponding CIS. Thus, no interruption of audio data transmission is triggered, normal audio data transmission is ensured, and the user experience is improved.

The single-earbud state in the embodiments of this application may include a first single-earbud state and a second single-earbud state. The first single-earbud state is a state in which the first earbud alone is used as the audio input/output device of the electronic device; the second single-earbud state is a state in which the second earbud alone is used as the audio input/output device of the electronic device; the dual-earbud state is a state in which the first earbud and the second earbud are used together as audio input/output devices of the electronic device. For example, the first earbud is earbud 101-1 and the second earbud is earbud 101-2.

Exemplarily, the electronic device 100 may be a mobile phone (such as the mobile phone 100 shown in FIG. 1 or FIG. 3), a tablet computer, a desktop, laptop, or handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, a media player, a television, or the like; the specific form of the device is not specially limited in the embodiments of this application. In the embodiments of this application, the structure of the electronic device 100 may be as shown in FIG. 6A, which is described in detail in the following embodiments.
Refer to FIG. 4, which is a schematic diagram of a product form of a TWS headset according to an embodiment of this application. As shown in FIG. 4, the TWS headset 101 may include earbud 101-1, earbud 101-2, and an earbud case 101-3. The earbud case can be used to hold the left and right earbuds of the TWS headset. FIG. 4 gives only an example of one product form of a TWS headset; the product form of the peripheral device provided in the embodiments of this application includes but is not limited to the TWS headset 101 shown in FIG. 4.

Refer to FIG. 5, which is a schematic structural diagram of an earbud (left or right) of a TWS headset according to an embodiment of this application. As shown in FIG. 5, an earbud of the TWS headset 101 (such as earbud 101-2) may include: a processor 510, a memory 520, a sensor 530, a wireless communication module 540, a receiver 550, a microphone 560, and a power supply 570.

The memory 520 can be used to store application program code, such as code for establishing a wireless connection with the other earbud of the TWS headset 101 (such as earbud 101-2) and for pairing and connecting the earbud with the electronic device 100 (such as the mobile phone 100). The processor 510 can control the execution of the application program code to implement the functions of the earbud of the TWS headset in the embodiments of this application.

The memory 520 may also store a Bluetooth address that uniquely identifies the earbud, as well as the Bluetooth address of the other earbud of the TWS headset. In addition, the memory 520 may store connection data of electronic devices that have previously been successfully paired with the earbud, for example the Bluetooth address of such an electronic device. Based on this connection data, the earbud can automatically pair with that electronic device without having to configure the connection between them, such as performing legality verification. The Bluetooth address may be a media access control (MAC) address.

The sensor 530 may be a distance sensor or a proximity light sensor. The earbud can use the sensor 530 to determine whether it is being worn by the user. For example, the earbud can use a proximity light sensor to detect whether there is an object near the earbud, thereby determining whether the earbud is worn; when determining that it is worn, the earbud can turn on the receiver 550. In some embodiments, the earbud may further include a bone conduction sensor, combined into a bone conduction headset; using the bone conduction sensor, the earbud can obtain the vibration signal of the vibrating bone of the vocal part, parse out the voice signal, and implement a voice function. In other embodiments, the earbud may further include a touch sensor for detecting the user's touch operations; or a fingerprint sensor for detecting the user's fingerprint and identifying the user; or an ambient light sensor, which can adaptively adjust some parameters, such as the volume, according to the perceived brightness of the ambient light.

The wireless communication module 540 is used to support short-range data exchange between the earbud of the TWS headset and various electronic devices, such as the electronic device 100. In some embodiments, the wireless communication module 540 may be a Bluetooth transceiver. The earbud of the TWS headset can establish a wireless connection with the electronic device 100 through the Bluetooth transceiver to implement short-range data exchange between the two.

At least one receiver 550, which may also be called an "earpiece", can be used to convert an audio electrical signal into a sound signal and play it. For example, when the earbud of the TWS headset serves as the audio output device of the electronic device 100, the receiver 550 can convert a received audio electrical signal into a sound signal and play it.

At least one microphone 560, which may also be called a "mic", is used to convert a sound signal into an audio electrical signal. For example, when the earbud of the TWS headset 101 serves as the audio input device of the electronic device 100, the microphone 560 can collect the user's voice signal while the user speaks (such as during a call or while sending a voice message) and convert it into an audio electrical signal. The audio electrical signal is the audio data in the embodiments of this application.

The power supply 570 can be used to supply power to the components included in the earbud of the TWS headset 101. In some embodiments, the power supply 570 may be a battery, such as a rechargeable battery.

Usually, the TWS headset 101 is provided with an earbud case (such as 101-3 shown in FIG. 4), which can be used to hold the left and right earbuds of the TWS headset. As shown in FIG. 4, the earbud case 101-3 can hold earbud 101-1 and earbud 101-2. In addition, the case can also charge the left and right earbuds of the TWS headset 101. Accordingly, in some embodiments, the earbud may further include an input/output interface 580, which can be used to provide any wired connection between the earbud of the TWS headset and the earbud case (such as the case 101-3).

In some embodiments, the input/output interface 580 may be an electrical connector. When the earbud of the TWS headset 101 is placed in the case, it can be electrically connected to the case (such as to the input/output interface of the case) through the electrical connector. After this electrical connection is established, the case can charge the power supply 570 of the earbud, and the earbud can also perform data communication with the case. For example, the earbud of the TWS headset 101 can receive a pairing command from the case through this electrical connection. The pairing command instructs the earbud of the TWS headset 101 to turn on the wireless communication module 540, so that the earbud can use a corresponding wireless communication protocol (such as Bluetooth) to pair and connect with the electronic device 100.

Of course, the earbud of the TWS headset 101 may also omit the input/output interface 580. In this case, the earbud can implement charging or data communication based on a wireless connection established with the case through the wireless communication module 540.

In addition, in some embodiments, the earbud case (such as the case 101-3) may further include components such as a processor and a memory. The memory can be used to store application program code, executed under the control of the case's processor to implement the functions of the case. For example, when the user opens the lid of the case, the case's processor can, by executing the application program code stored in the memory, send a pairing command to the earbuds of the TWS headset in response to the user's operation of opening the lid.

It can be understood that the structure illustrated in the embodiments of this application does not constitute a specific limitation on the earbud of the TWS headset 101. It may have more or fewer components than shown in FIG. 5, may combine two or more components, or may have a different component configuration. For example, the earbud may further include components such as an indicator light (which can indicate states such as battery level) and a dust filter (which can be used with the earpiece). The various components shown in FIG. 5 may be implemented in hardware including one or more signal processing or application-specific integrated circuits, in software, or in a combination of hardware and software.

It should be noted that the left and right earbuds of the TWS headset 101 may have the same structure; for example, both may include the components shown in FIG. 5. Alternatively, their structures may differ; for example, one earbud (such as the right one) may include the components shown in FIG. 5, while the other (such as the left one) may include the components in FIG. 5 other than the microphone 560.
Taking the electronic device being the mobile phone 100 as an example, FIG. 6A shows a schematic structural diagram of the electronic device 100. As shown in FIG. 6A, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.

It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors.

The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.

A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data that the processor 110 has just used or uses cyclically.

In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.

It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present invention are merely schematic illustrations and do not constitute a structural limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may also adopt interface connection manners different from those in the above embodiment, or a combination of multiple interface connection manners.

The charging management module 140 is used to receive charging input from a charger, which may be a wireless or a wired charger. In some wired-charging embodiments, the charging management module 140 can receive the charging input of a wired charger through the USB interface 130. In some wireless-charging embodiments, it can receive wireless charging input through the wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.

The power management module 141 is used to connect the battery 142, the charging management module 140, and the processor 110. It receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and so on. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110; in still other embodiments, the power management module 141 and the charging management module 140 may be provided in the same device.

The wireless communication function of the electronic device 100 can be implemented through antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.

Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. The mobile communication module 150 can provide solutions for wireless communication applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on. It can receive electromagnetic waves through antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. It can also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110, or in the same device as at least some modules of the processor 110.

The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be sent into a medium- or high-frequency signal; the demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal and then transmits it to the baseband processor for processing. After processing by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through an audio device (not limited to the speaker 170A and the receiver 170B) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device; in other embodiments, it may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.

The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. It receives electromagnetic waves via antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. It can also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves for radiation through antenna 2.

In some embodiments, antenna 1 of the electronic device 100 is coupled with the mobile communication module 150 and antenna 2 with the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS). For example, in the embodiments of this application, the electronic device 100 can use the wireless communication module 160 to establish a wireless connection with a peripheral device through a wireless communication technology such as Bluetooth (BT). Based on the established wireless connection, the electronic device 100 can send voice data to the peripheral device and can also receive voice data from it.
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor; it performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.

The display screen 194 is used to display images, videos, and the like, and includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light emitting diodes (QLED), and so on. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.

The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and so on.

The ISP is used to process data fed back by the camera 193; in some embodiments, the ISP may be provided in the camera 193. The camera 193 is used to capture still images or videos. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1. The video codec is used to compress or decompress digital video; the electronic device 100 may support one or more video codecs.

The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and videos in the external memory card.

The internal memory 121 can be used to store computer-executable program code, and the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes various functional applications and data processing of the electronic device 100. For example, in the embodiments of this application, the processor 110 can, by executing instructions stored in the internal memory 121, establish a wireless connection with a peripheral device through the wireless communication module 160 and perform short-range data exchange with the peripheral device, to implement functions such as making calls and playing music through the peripheral device. The internal memory 121 may include a program storage area, which can store the operating system and the application programs required by at least one function (such as a sound playback function and an image playback function), and a data storage area, which can store data created during the use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). In the embodiments of this application, after a wireless connection is established between the electronic device 100 and a peripheral device using a wireless communication technology such as Bluetooth, the electronic device 100 can store the Bluetooth address of the peripheral device in the internal memory 121. In some embodiments, when the peripheral device is a device containing two bodies, such as a TWS headset whose left and right earbuds each have their own Bluetooth address, the electronic device 100 can store the Bluetooth addresses of the left and right earbuds in the internal memory 121 in association with each other.

The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on.

The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.

The speaker 170A, also called the "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can listen to music or a hands-free call through the speaker 170A.

The receiver 170B, also called the "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear.

The microphone 170C, also called the "mic", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak close to the microphone 170C to input the sound signal. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals; or with three, four, or more microphones 170C, to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.

In the embodiments of this application, when the electronic device 100 has established a wireless connection with the peripheral device 101, such as a TWS headset, the TWS headset can be used as the audio input/output device of the electronic device 100. Exemplarily, the audio module 170 can receive the audio electrical signal passed by the wireless communication module 160, implementing functions such as answering calls and playing music through the TWS headset. For example, during a call, the TWS headset can collect the user's voice signal, convert it into an audio electrical signal, and send it to the wireless communication module 160 of the electronic device 100, which passes it to the audio module 170. The audio module 170 can convert the received audio electrical signal into a digital audio signal, encode it, and pass it to the mobile communication module 150, which transmits it to the peer device of the call to implement the call. As another example, when the user plays music with the media player of the electronic device 100, the application processor can pass the audio electrical signal corresponding to the music to the audio module 170, which passes it to the wireless communication module 160. The wireless communication module 160 can send the audio electrical signal to the TWS headset, so that the TWS headset converts it into a sound signal and plays it.

The headset jack 170D is used to connect a wired headset. The headset jack 170D may be the USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.

The pressure sensor 180A is used to sense pressure signals and can convert them into electrical signals. In some embodiments, the pressure sensor 180A may be provided on the display screen 194. There are many types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. When a force acts on a capacitive pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the intensity of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A and can also calculate the touch position from its detection signal. In some embodiments, touch operations acting on the same position but with different intensities may correspond to different operation instructions: for example, when a touch operation with intensity below a first pressure threshold acts on the short-message application icon, an instruction to view the message is executed; when a touch operation with intensity at or above the first pressure threshold acts on the icon, an instruction to create a new message is executed.

The gyroscope sensor 180B can be used to determine the motion posture of the electronic device 100, and can also be used for navigation and motion-sensing game scenarios. The barometric pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a Hall sensor. The acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). The distance sensor 180F is used to measure distance. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the phone close to the ear during a call, so as to automatically turn off the screen to save power; the proximity light sensor 180G can also be used in holster mode and pocket mode for automatic unlocking and screen locking. The ambient light sensor 180L is used to perceive the brightness of ambient light, can also be used to automatically adjust the white balance when taking photos, and can cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, to prevent accidental touches. The fingerprint sensor 180H is used to collect fingerprints; the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint call answering, and so on. The temperature sensor 180J is used to detect temperature. The touch sensor 180K, also called a "touch panel", may be provided on the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect touch operations acting on or near it and can pass the detected touch operation to the application processor to determine the touch event type; visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be provided on the surface of the electronic device 100, at a position different from that of the display screen 194. The bone conduction sensor 180M can obtain vibration signals and can also contact the human pulse to receive the blood-pressure beating signal; the application processor can parse heart-rate information based on the blood-pressure beating signal obtained by the bone conduction sensor 180M, implementing a heart-rate detection function.

The button 190 includes a power button, volume buttons, and so on; the button 190 may be a mechanical button or a touch button. The electronic device 100 can receive button input and generate key signal input related to the user settings and function control of the electronic device 100. The motor 191 can generate vibration prompts, which can be used for incoming-call vibration and for touch vibration feedback. The indicator 192 may be an indicator light, which can indicate the charging status and battery-level changes, or indicate messages, missed calls, notifications, and so on. The SIM card interface 195 is used to connect a SIM card; the SIM card can be inserted into or pulled out of the SIM card interface 195 to contact or separate from the electronic device 100. The electronic device 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication.
Terms used in the embodiments of this application are introduced below:

1. CIG and CIS.

The CIG identifier (CIG_ID) is used to identify a CIG; for example, CIG(1) and CIG(2) denote different CIGs. One CIG may include multiple CIS. In the ISO channel transmission mechanism, the transmission channel between the source device and each destination device is defined as a CIS, and each destination device corresponds to one CIS. For example, take the mobile phone 100 and the left and right earbuds of the TWS headset 101: the mobile phone 100 can configure one CIG for the earbuds and configure the CIG to include two CIS, such as CIS(1) and CIS(2); earbud 101-1 corresponds to CIS(1) and earbud 101-2 corresponds to CIS(2). Each CIS has a different identifier (CIS_ID); for example, CIS(1) and CIS(2) have different identifiers. Multiple CIS in the same CIG share a common CIG synchronization point and CIG presentation point, which are used to achieve playback-level synchronization of audio data across multiple peripheral devices.

A CIG includes multiple CIG events (CIG_event). For example, CIG(1) may include CIG event (x) and CIG event (x+1) shown in FIG. 6B. Each CIG event belongs in time to one ISO interval (ISO_interval). For example, as shown in FIG. 6B, CIG event (x) belongs in time to the ISO interval between the CIG(x) anchor point and the CIG(x+1) anchor point, and CIG event (x+1) belongs in time to the ISO interval between the CIG(x+1) anchor point and the CIG(x+2) anchor point. A CIG anchor point is the start time point of the corresponding CIG event; for example, the CIG(x) anchor point is the start time point of CIG event (x).

Each CIG event may include multiple CIS events (CIS_event). For example, as shown in FIG. 6B, CIG event (x) includes CIS(1) event (x) and CIS(2) event (x), and CIG event (x+1) includes CIS(1) event (x+1) and CIS(2) event (x+1).

Each CIS may include multiple CIS events. For example, CIS(1) may include CIS(1) event (x) and CIS(1) event (x+1) shown in FIG. 6B, and CIS(2) may include CIS(2) event (x) and CIS(2) event (x+1).

Each CIS event belongs in time to one ISO interval. For example, as shown in FIG. 6B, CIS(1) event (x) belongs in time to the ISO interval between the CIS(1).x anchor point and the CIS(1).x+1 anchor point; CIS(2) event (x) belongs in time to the ISO interval between the CIS(2).x anchor point and the CIS(2).x+1 anchor point; and CIS(1) event (x+1) belongs in time to the ISO interval between the CIS(1).x+1 anchor point and the CIS(1).x+2 anchor point.

The ISO interval is the time between two consecutive CIS anchor points, where two consecutive CIS anchor points are two consecutive anchor points of the same CIS. For example, the CIS(1).x anchor point and the CIS(1).x+1 anchor point are two consecutive anchor points of CIS(1). A CIS anchor point is the start time point of the corresponding CIS event; for example, the CIS(1).x anchor point is the start time point of CIS(1) event (x).
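The anchor-point arithmetic above can be captured in a small helper. This is an illustrative timing model only; the function name, the 10 ms ISO interval, and the 2 500 µs CIS offset are made-up values for the sketch, not values from the specification:

```python
# Illustrative timing model: anchors of successive events of the same CIS
# are spaced exactly one ISO interval apart.
ISO_INTERVAL_US = 10_000  # assumed 10 ms ISO interval (example value)

def cis_anchor(cig_anchor_us: int, cis_offset_us: int, event_index: int) -> int:
    """Anchor of CIS event (x + event_index), given the CIG(x) anchor and the
    fixed offset of this CIS from the CIG anchor point."""
    return cig_anchor_us + cis_offset_us + event_index * ISO_INTERVAL_US

cig_x = 0
a1_x = cis_anchor(cig_x, 0, 0)        # CIS(1).x anchor = CIG(x) anchor
a1_x1 = cis_anchor(cig_x, 0, 1)       # CIS(1).x+1 anchor
a2_x = cis_anchor(cig_x, 2_500, 0)    # CIS(2).x anchor, offset within the interval
print(a1_x1 - a1_x)  # consecutive CIS(1) anchors: one ISO interval apart
```

With a zero offset the CIS anchor coincides with the CIG anchor, which is exactly the configuration described for CIS(1).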
2. Sub-events (Sub_event).

Each CIS can define NSE sub-events within one ISO interval; that is, each CIS event consists of a number of sub-events (NSE), where NSE is greater than or equal to 1. For example, as shown in any of FIG. 6B, (a) of FIG. 10, or (a) of FIG. 12, the NSE of CIS(1) (that is, N1) equals 2, and CIS(1) event (x) consists of sub-event (1_1) and sub-event (1_2); the NSE of CIS(2) (that is, N2) equals 2, and CIS(2) event (x) consists of sub-event (2_1) and sub-event (2_2).

As shown in FIG. 6B, each sub-event consists of one "M->S" and one "S->M". "M->S" is used by the source device to send audio data to the destination device and by the destination device to receive the audio data sent by the source device. "S->M" is used by the destination device to send audio data to the source device and by the source device to receive the audio data sent by the destination device. For example, the "M->S" of CIS(1) is used by the mobile phone 100 to send audio data to earbud 101-1 and by earbud 101-1 to receive the audio data sent by the mobile phone 100. The "S->M" of CIS(1) is used by earbud 101-1 to send data (such as audio data or feedback information) to the mobile phone 100 and by the mobile phone 100 to receive the data sent by earbud 101-1. The "M->S" of CIS(2) is used by the mobile phone 100 to send audio data to earbud 101-2, that is, by earbud 101-2 to receive the audio data sent by the mobile phone 100. The "S->M" of CIS(2) is used by earbud 101-2 to send data (such as audio data or feedback information) to the mobile phone 100 and by the mobile phone 100 to receive the data sent by earbud 101-2. The feedback information may be an acknowledgement (ACK) or a negative acknowledgement (NACK).

Each sub-event (Sub_event) belongs in time to one sub-interval (Sub_interval). The sub-interval of a CIS may be the time from the start time point of one sub-event to the start time point of the next sub-event in the same CIS event. For example, as shown in (a) of FIG. 10 or (a) of FIG. 12, the sub-interval of CIS(1) (that is, CIS(1)_sub-interval) may be the time from the start time point of sub-event (1_1) to the start time point of sub-event (1_2) in CIS(1) event (x). The sub-interval of CIS(2) (that is, CIS(2)_sub-interval) may be the time from the start time point of sub-event (2_1) to the start time point of sub-event (2_2) in CIS(2) event (x).

Note that, for a given ISO interval, the larger the NSE, the more sub-events (Sub_event) the ISO interval includes, the more data packets are transmitted in the ISO interval, and the higher the duty cycle of the ISO channel. The mobile phone 100 can determine the NSE according to the duty-cycle requirement that the audio data places on the ISO channel.
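The NSE/duty-cycle relationship can be sketched numerically. This is an illustrative calculation only; the function and all durations (1 200 µs sub-events, 10 ms interval) are assumed example values, not specification figures:

```python
# Sketch of the duty-cycle relationship: with a fixed ISO interval, more
# sub-events (NSE) per CIS event means more of the interval is occupied.
def iso_duty_cycle(nse: int, subevent_len_us: int, iso_interval_us: int) -> float:
    """Fraction of the ISO interval occupied by one CIS's sub-events."""
    return nse * subevent_len_us / iso_interval_us

low = iso_duty_cycle(nse=2, subevent_len_us=1_200, iso_interval_us=10_000)
high = iso_duty_cycle(nse=4, subevent_len_us=1_200, iso_interval_us=10_000)
print(low, high)  # doubling NSE doubles the duty cycle
```

A higher NSE buys more retransmission opportunities per interval at the cost of airtime, which is the trade-off the phone weighs when it chooses NSE for a given audio service.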
Refer to FIG. 7, which shows a BLE-based audio protocol framework provided by an embodiment of this application. As shown in FIG. 7, the protocol framework may include: an application layer, a host (Host), a host controller interface (HCI), and a controller (Controller).

The controller includes the link layer and the physical layer. The physical layer is responsible for providing the physical channel for data transmission. Usually, several different types of channels exist in a communication system, such as control channels, data channels, and voice channels. The link layer includes the ACL link and the ISO channel: the ACL link is used to transmit control messages between devices, such as content control messages (previous track, next track, etc.), while the ISO channel can be used to transmit isochronous data (that is, audio data) between devices.

The Host and the Controller communicate through the HCI, and the medium of their communication is HCI commands. The Host may be implemented in the application processor (AP) of the device, and the Controller may be implemented in the Bluetooth chip of the device. Optionally, in small devices, the Host and the Controller may be implemented in the same processor or controller, in which case the HCI is optional.

For ease of understanding, the audio data transmission method provided by the embodiments of this application is described in detail below with reference to the accompanying drawings. In the following embodiments, the electronic device is the mobile phone 100, the first earbud of the TWS headset is earbud 101-1 of the TWS headset 101, and the second earbud is earbud 101-2.
In the embodiments of this application, whether one earbud of the TWS headset 101 (such as earbud 101-1) is used alone as the audio input/output device of the mobile phone 100 (the single-earbud state), or both earbuds (101-1 and 101-2) are used together as the audio input/output devices of the mobile phone 100 (the dual-earbud state), the mobile phone 100 can configure for the TWS headset 101 a first CIG that includes two CIS (such as the first CIS and the second CIS).

Refer to FIG. 8. With reference to the BLE-based audio protocol framework shown in FIG. 7, this embodiment describes the process by which the mobile phone 100 configures the first CIG and creates the CIS. The mobile phone 100 configures the CIG and creates the CIS on the premise that the mobile phone 100 and the earbud(s) (101-1 and/or 101-2) are already in the Connection state. Both the mobile phone and the earbuds have a Host and a link layer (LL, in the Controller), which communicate through the HCI; the Host and LL of the earbuds are not shown in FIG. 8.

The Host of the mobile phone 100 can set the CIG parameters of the first CIG through HCI commands. The CIG parameters are used to create the isochronous data transmission channels (that is, the CIS). For the method by which the mobile phone 100 negotiates CIG parameters with the earbuds, refer to the conventional method of negotiating CIG parameters between electronic devices, which is not described here.

Exemplarily, the Host of the mobile phone 100 can, in response to a service request of the application layer 901, set the CIG parameters of the first CIG with the LL of the mobile phone 100 according to the audio data. For different audio data, the mobile phone 100 can set different CIG parameters.

Specifically, the Host of the mobile phone 100 can send CIG parameter setting information to the LL of the mobile phone 100 through the HCI. For example, the CIG parameter setting information may be the HCI command "LE Set CIG Parameters". Correspondingly, the LL of the mobile phone 100 can return first confirmation information, for example the response message "Command Complete".

Subsequently, the Host of the mobile phone 100 can initiate the creation of the CIS through an HCI command. Specifically, the Host of the mobile phone 100 can send CIS creation information to the LL of the mobile phone 100 through the HCI. For example, the CIS creation information may be the HCI command "LE Create CIS". Correspondingly, the LL of the mobile phone 100 can return second confirmation information, for example "HCI Command Status".

The LL of the first earbud sends CIS connection response information (for example, an LL_CIS_RSP message) to the LL of the mobile phone; the LL of the mobile phone sends fourth confirmation information (for example, an LL_CIS_IND message) to the LL of the first earbud, and the LL of the mobile phone sends CIS-connection-established information (for example, an LE CIS Established message) to the Host of the mobile phone. At this point, a CIS connection link is established between the mobile phone and the first earbud.

The LL of the mobile phone 100 can request the earbud (such as earbud 101-1) to create the CIS through a CIS request (such as LL_CIS_REQ). The earbud (such as earbud 101-1) replies to the LL of the mobile phone 100 with a CIS response (such as LL_CIS_RSP). The LL of the mobile phone 100 sends a CIS confirmation (such as LL_CIS_IND) to the earbud. Moreover, the LL of the mobile phone can send CIS-connection-established information (for example, an LE CIS Established message) to the Host of the mobile phone (not shown in the figure).

At this point, the CIS between the mobile phone 100 and the earbud is established. After the established CIS is activated, the mobile phone 100 and the earbud can transmit audio data.
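The configuration/creation handshake described above can be summarized as an ordered message log. This is a simplified sketch for illustration: the message names follow the text, but the flow is a flattened outline of the procedure, not a complete model of the Bluetooth LE Audio CIS establishment sequence:

```python
# Sketch of the CIG-configuration and CIS-creation handshake as an ordered
# message log (simplified; not the full LE Audio procedure).
def configure_and_create_cis() -> list:
    log = []
    log.append(("Host -> LL", "LE Set CIG Parameters"))       # set CIG params
    log.append(("LL -> Host", "Command Complete"))            # 1st confirmation
    log.append(("Host -> LL", "LE Create CIS"))               # initiate creation
    log.append(("LL -> Host", "HCI Command Status"))          # 2nd confirmation
    log.append(("phone LL -> earbud LL", "LL_CIS_REQ"))       # request CIS
    log.append(("earbud LL -> phone LL", "LL_CIS_RSP"))       # CIS response
    log.append(("phone LL -> earbud LL", "LL_CIS_IND"))       # CIS confirmation
    log.append(("LL -> Host", "LE CIS Established"))          # established event
    return log

for who, msg in configure_and_create_cis():
    print(f"{who}: {msg}")
```

The log makes explicit which exchanges stay on the phone's HCI boundary and which cross the air interface between the two link layers.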
It can be understood that, although the mobile phone 100 can configure for the TWS headset 101 a first CIG including two CIS regardless of whether the headset is in the single-earbud or dual-earbud state, the CIS activated by the mobile phone 100 and the TWS headset 101 differ between the single-earbud and dual-earbud states. Specifically:

In the dual-earbud state, the mobile phone 100 can activate the first CIS and the second CIS, transmit audio data with earbud 101-1 through the first CIS, and transmit audio data with earbud 101-2 through the second CIS. In this way, when the TWS headset 101 switches from the dual-earbud state to the single-earbud state, the mobile phone 100 can deactivate one CIS and continue to use the other CIS to transmit audio data with the corresponding earbud.

In the single-earbud state, the mobile phone 100 activates only one CIS (such as the first CIS) and transmits audio data with the corresponding earbud (such as the first earbud) through the activated CIS; the other CIS (such as the second CIS) is not activated. In this way, when the TWS headset 101 switches from the single-earbud state to the dual-earbud state, the electronic device 100 can activate the other CIS and use both CIS to transmit audio data with both earbuds.

In the embodiments of this application, if single/dual-earbud switching occurs during music playback or voice communication, the electronic device 100 does not need to reconfigure the ISO channel (that is, reconfigure the CIS); it only needs to activate or deactivate the corresponding CIS. Thus, no interruption of audio data transmission is triggered, normal audio data transmission is ensured, and the user experience is improved.

When the user wants to use the TWS headset 101, the user can open the lid of the earbud case 101-3 of the TWS headset 101. At this time, earbud 101-1 and earbud 101-2 can automatically pair and connect.

Moreover, after the lid of the case 101-3 is opened, either of earbuds 101-1 and 101-2 (such as earbud 101-2) can broadcast a pairing advertisement. If the mobile phone 100 has Bluetooth enabled, it can receive the pairing advertisement and prompt the user that a relevant Bluetooth device (such as earbud 101-2) has been discovered. After the user selects earbud 101-2 on the mobile phone 100 as the device to connect, the mobile phone 100 can pair with earbud 101-2.

After earbud 101-2 is paired with the mobile phone 100, it can send the Bluetooth address of the mobile phone 100 to earbud 101-1 through the Bluetooth connection between the earbuds, and notify earbud 101-1 to broadcast a pairing advertisement. In this way, the mobile phone 100 can receive the pairing advertisement sent by earbud 101-1 and pair with earbud 101-1.

After pairing with earbud 101-2, the mobile phone 100 can establish ACL link 1 with earbud 101-2; after pairing with earbud 101-1, the mobile phone 100 can establish ACL link 2 with earbud 101-1.

Earbud 101-2 can also send the MAC address of earbud 101-1 to the mobile phone 100, to indicate to the mobile phone 100 that earbud 101-1 and earbud 101-2 are the two bodies of the same peripheral device (such as the TWS headset 101).
In the embodiments of this application, the mobile phone 100 can determine whether the TWS headset 101 is in the single-earbud state or the dual-earbud state through the following manners (1) to (3), where the single-earbud state includes the first single-earbud state and the second single-earbud state. That is, the mobile phone 100 can perform S900 shown in FIG. 9 through the following manners (1) to (3):

Manner (1): whether both earbud 101-1 and earbud 101-2 have been taken out of the earbud case 101-3.

After the lid of the case 101-3 is opened, the user can take one or both earbuds out of the case 101-3. An earbud can detect, through a sensor (such as a light sensor or a contact sensor) or an electrical connector, that it has been taken out of the case 101-3. After being taken out of the case 101-3, the earbud can indicate this to the mobile phone 100. For example, taking earbud 101-1: earbud 101-1 can send a control command to the mobile phone 100 through ACL link 1 to indicate that it has been taken out of the case 101-3.

If the mobile phone 100 determines that both earbuds of the TWS headset 101 have been taken out of the case 101-3, it can determine that the TWS headset 101 is in the dual-earbud state; that is, both earbuds (101-1 and 101-2) are used together as the audio input/output devices of the mobile phone 100.

If the mobile phone 100 determines that only one earbud of the TWS headset 101 has been taken out of the case 101-3 while the other has not, it can determine that the TWS headset 101 is in the single-earbud state; that is, one earbud (such as earbud 101-1) alone is used as the audio input/output device of the mobile phone 100.

Manner (2): whether both earbud 101-1 and earbud 101-2 are worn.

After taking an earbud out of the case 101-3, the user can wear it on the ear. The earbud can detect whether it is worn through a sensor (such as a light sensor or a bone sensor). After being worn, the earbud can indicate this to the mobile phone 100. For example, taking earbud 101-1: earbud 101-1 can send a control command to the mobile phone 100 through ACL link 1 to indicate that it is worn.

It can be understood that even if both earbuds of the TWS headset 101 have been taken out of the case 101-3, the user may use only one earbud for audio data transmission with the mobile phone 100. Based on this, in the embodiments of this application, whether the TWS headset 101 is in the dual-earbud or single-earbud state can be determined by checking whether both earbuds 101-1 and 101-2 are worn.

If the mobile phone 100 determines that both earbuds of the TWS headset 101 are worn, it can determine that the TWS headset 101 is in the dual-earbud state; that is, both earbuds (101-1 and 101-2) are used together as the audio input/output devices of the mobile phone 100.

If the mobile phone 100 determines that only one earbud of the TWS headset 101 is worn while the other is not, it can determine that the TWS headset 101 is in the single-earbud state; that is, one earbud (such as earbud 101-1) alone is used as the audio input/output device of the mobile phone 100.

Manner (3): whether earbud 101-1 and earbud 101-2 are paired and connected.

In the dual-earbud state, the two earbuds of the TWS headset 101 are paired and connected; in the single-earbud state, they are not. Therefore, the mobile phone 100 can determine whether the TWS headset 101 is in the dual-earbud or single-earbud state according to whether earbuds 101-1 and 101-2 are paired and connected.

For example, in some usage scenarios, after taking one earbud (such as earbud 101-1) out of the case 101-3, the user does not take out the other (such as earbud 101-2) and then closes the case 101-3. After the case 101-3 is closed, earbud 101-2 inside the case disconnects from earbud 101-1 outside it; that is, the two earbuds of the TWS headset 101 are no longer paired and connected.

After the two earbuds disconnect, earbud 101-1 outside the case 101-3 can indicate the disconnection to the mobile phone 100. For example, earbud 101-1 can send a control command to the mobile phone 100 through ACL link 1 to indicate that the two earbuds have disconnected.

If earbud 101-1 and earbud 101-2 are paired and connected, the mobile phone 100 can determine that the TWS headset 101 is in the dual-earbud state; that is, both earbuds (101-1 and 101-2) are used together as the audio input/output devices of the mobile phone 100.

If earbud 101-1 and earbud 101-2 are not paired and connected, the mobile phone 100 can determine that the TWS headset 101 is in the single-earbud state; that is, one earbud (such as earbud 101-1) alone is used as the audio input/output device of the mobile phone 100.

It should be noted that the methods by which the mobile phone 100 determines whether the TWS headset 101 is in the single-earbud or dual-earbud state include but are not limited to manners (1) to (3). For example, the mobile phone 100 can make the determination according to whether both earbuds 101-1 and 101-2 have a connection with the phone. In some usage scenarios, after taking earbud 101-1 out of the case 101-3, the user does not take out earbud 101-2 and then closes the case 101-3. After the case 101-3 is closed, earbud 101-2 inside it disconnects from the mobile phone 100. At this time, the mobile phone 100 can determine that the TWS headset 101 is in the single-earbud state.
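The decision logic of manners (1) to (3) can be fused into one helper for illustration. This is a hypothetical sketch: the boolean inputs stand in for the indications the earbuds send over their ACL links, and the function name and signature are made up, not part of any real API:

```python
# Sketch of the single/dual-earbud determination from reported earbud signals.
def headset_state(out_of_case: tuple, worn: tuple, buds_connected: bool) -> str:
    """Return 'dual' or 'single' from (left, right) earbud indications."""
    both_out = all(out_of_case)    # manner (1): both taken out of the case
    both_worn = all(worn)          # manner (2): both worn on the ears
    # manner (3): the two earbuds are paired and connected to each other
    if both_out and both_worn and buds_connected:
        return "dual"
    return "single"

print(headset_state((True, True), (True, True), True))    # dual-earbud state
print(headset_state((True, True), (True, False), True))   # only one worn
```

In the real system any one manner may suffice; combining them, as here, simply illustrates that each signal independently pushes the decision toward the single-earbud state.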
Take the first CIS being CIS(1) and the second CIS being CIS(2) as an example. When the mobile phone 100 configures the first CIG, CIS(1) and CIS(2) are configured in the sequential-scheduling (Sequential) transmission mode or the interleaved-scheduling (Interleaved) transmission mode. The CIG parameters configured for the first CIG differ between the two modes. For detailed descriptions of the sequential- and interleaved-scheduling transmission modes, refer to the descriptions in the following embodiments; details are not repeated here.

In some embodiments, after S900 shown in FIG. 9, assume that the TWS headset 101 is in the dual-earbud state; that is, both earbuds (101-1 and 101-2) are used together as the audio input/output devices of the mobile phone 100. In this embodiment, the method of the embodiments of this application is described with CIS(1) and CIS(2) configured in the interleaved-scheduling transmission mode.

For example, after S900 shown in FIG. 9, if the mobile phone 100 determines that the TWS headset 101 is in the dual-earbud state, the mobile phone 100 can perform S901. In the process of configuring the CIG by performing S801 shown in FIG. 8, the mobile phone 100 can configure the first CIG and configure CIS(1) and CIS(2) in the interleaved-scheduling transmission mode shown in (a) of FIG. 10. Specifically, as shown in (a) or (b) of FIG. 10, the anchor point of CIS(1) (such as the CIS(1).x anchor point) is the anchor point of the first CIG (such as the anchor point of CIG(x)), while the anchor point of CIS(2) (such as the CIS(2).x anchor point) coincides with the end point of the first sub-event (sub-event 1_1) in CIS(1) event (x). Moreover, the sub-interval of CIS(1) (such as CIS(1)_sub-interval) differs from the sub-interval of CIS(2) (such as CIS(2)_sub-interval).

Subsequently, the mobile phone 100 can create CIS(1) for earbud 101-1 by performing S802 with earbud 101-1, and create CIS(2) for earbud 101-2 by performing S803 with earbud 101-2. The steps of creating CIS(1) and CIS(2) may fall between S901 and S902 shown in FIG. 9 (not shown in FIG. 9). Finally, the mobile phone 100 can instruct earbud 101-1 to activate CIS(1) and earbud 101-2 to activate CIS(2) (that is, perform S902 shown in FIG. 9). The mobile phone 100 can send an activation instruction to earbud 101-1 through ACL(1) and to earbud 101-2 through ACL(2); the activation instruction triggers earbud 101-1 to activate CIS(1) and earbud 101-2 to activate CIS(2). After CIS(1) and CIS(2) are activated, the mobile phone 100 can transmit audio data with earbuds 101-1 and 101-2 in the interleaved-scheduling transmission mode shown in (a) of FIG. 10 (that is, perform S903 shown in FIG. 9).

In the dual-earbud state, the method by which the mobile phone 100 transmits audio data with earbuds 101-1 and 101-2 in the interleaved-scheduling transmission mode shown in (a) of FIG. 10, that is, S903 shown in FIG. 9, may specifically include the following processes (A) to (D).
过程(A):手机100从CIS(1).x锚点(即CIG(x)锚点)开始,在CIS(1)事件(x)的子事件(1_1)中的“M->S”向耳塞101-1发送音频数据(如音频数据包 1)。耳塞101-1可以在子事件(1_1)中的“M->S”接收手机100发送的音频数据(如音频数据包1)。耳塞101-1在子事件(1_1)中的“S->M”向手机100发送第一数据。手机100在子事件(1_1)中的“S->M”接收耳塞101-1发送的第一数据。该第一数据可以包括:耳塞101-1向手机100回复的反馈信息;和/或,耳塞101-1中的麦克风(如麦克风160)采集到的音频数据。上述反馈信息可以为上述音频数据包1的ACK或者NACK。
过程(B):手机100从CIS(2).x锚点开始,在CIS(2)事件(x)的子事件(2_1)中的“M->S”向耳塞101-2发送音频数据(如音频数据包1)。耳塞101-2可以在子事件(2_1)中的“M->S”接收手机100发送的音频数据(如音频数据包1)。耳塞101-2在子事件(2_1)中的“S->M”向手机100发送第二数据。手机100在子事件(2_1)中的“S->M”接收耳塞101-2发送的第二数据。该第二数据可以包括:耳塞101-2向手机100回复的反馈信息;和/或,耳塞101-2中的麦克风(如麦克风160)采集到的音频数据。上述反馈信息可以为上述音频数据包1的ACK或者NACK。
过程(C):假设手机100在子事件(1_1)中的“S->M”接收到上述音频数据包1的ACK。手机100在子事件(1_2)中的“M->S”向耳塞101-1发送音频数据(如音频数据包2)。耳塞101-1可以在子事件(1_2)中的“M->S”接收手机100发送的音频数据(如音频数据包2)。耳塞101-1在子事件(1_2)中的“S->M”向手机100发送第三数据。手机100在子事件(1_2)中的“S->M”接收耳塞101-1发送的第三数据。该第三数据可以包括:耳塞101-1向手机100回复的反馈信息;和/或,耳塞101-1中的麦克风(如麦克风160)采集到的音频数据。上述反馈信息可以为上述音频数据包2的ACK或者NACK。
过程(D):假设手机100在子事件(2_1)中的“S->M”接收到上述音频数据包1的ACK。手机100在CIS(2)事件(x)的子事件(2_2)中的“M->S”向耳塞101-2发送音频数据(如音频数据包2)。耳塞101-2可以在子事件(2_2)中的“M->S”接收手机100发送的音频数据(如音频数据包2)。耳塞101-2在子事件(2_2)中的“S->M”向手机100发送第四数据。手机100在子事件(2_2)中的“S->M”接收耳塞101-2发送的第四数据。该第四数据可以包括:耳塞101-2向手机100回复的反馈信息;和/或,耳塞101-2中的麦克风(如麦克风160)采集到的音频数据。上述反馈信息可以为上述音频数据包2的ACK或者NACK。
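上述过程(A)至过程(D)中"M->S"发包、"S->M"回ACK/NACK并据此决定重传或发下一包的逻辑,可以用下面的Python片段做一个极简示意(函数名与参数均为本示例的假设,仅为简化模型,并非蓝牙协议实现):

```python
# 示意:一个 CIS 事件内基于 ACK/NACK 的发包与重传逻辑(简化模型)
def run_cis_event(packets, feedback):
    """packets: 待发送的音频数据包列表;
    feedback: 各子事件 "S->M" 收到的反馈('ACK' 或 'NACK')。
    返回每个子事件 "M->S" 实际发送的包序号(从 1 开始)。"""
    sent = []
    idx = 0
    for fb in feedback:
        if idx >= len(packets):
            break
        sent.append(idx + 1)      # "M->S":发送当前音频数据包
        if fb == 'ACK':           # "S->M":收到 ACK 则下一子事件发新包,
            idx += 1              # 收到 NACK 则下一子事件重传同一包
    return sent

# 对应过程(A)/(C):子事件(1_1) 发包 1 并收到 ACK,子事件(1_2) 发包 2
assert run_cis_event(['pkt1', 'pkt2'], ['ACK', 'ACK']) == [1, 2]
# 若子事件(1_1) 收到 NACK,则子事件(1_2) 重传包 1
assert run_cis_event(['pkt1', 'pkt2'], ['NACK', 'ACK']) == [1, 1]
```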
进一步的,手机100与TWS耳机101的左右耳塞可以在CIG事件(x+n)中采用与CIG事件(x)相同的传输方式进行音频数据传输。n大于或者等于1,且n为整数。其中,手机100与TWS耳机101的左右耳塞在CIG事件(x+n)中进行音频数据传输的方法,可以参考在CIG事件(x)中进行音频数据传输的方法,本申请实施例这里不予赘述。
其中,在手机100与耳塞101-1和耳塞101-2执行S903的过程中,TWS耳机101可能会发生单双耳切换,即TWS耳机101可能会由双耳状态切换为单耳状态。即手机100可以执行图9所示的S904。
示例性的,手机100可以通过以下方式(I)-方式(IV)判断TWS耳机101由双耳状态切换为单耳状态。即手机100可以通过以下方式(I)-方式(IV)执行图9所 示的S904:
方式(I):耳塞101-1或者耳塞101-2是否被放入耳塞盒101-3。
在上述方式(1)中,如果耳塞101-1和耳塞101-2都被拿出耳塞盒101-3,手机100确定TWS耳机101处于双耳状态。方式(I)与上述方式(1)对应,在双耳状态下,手机100可以通过判断耳塞101-1或者耳塞101-2是否被放入耳塞盒101-3,确定是否切换为单耳状态。例如,手机100可以在耳塞101-1或者耳塞101-2中任一耳塞被放入耳塞盒101-3时,确定TWS耳机101由双耳状态切换为单耳状态。即TWS耳机101的一个耳塞(如耳塞101-2)单独作为手机100的音频输入/输出设备被使用。
其中,耳塞可以通过传感器(如光传感器或者接触传感器等)或者电连接器检测到耳塞被放入耳塞盒101-3。耳塞被放入耳塞盒101-3后,可以向手机100指示该耳塞被放入耳塞盒101-3。例如,以耳塞101-2为例。耳塞101-2可以通过ACL链路2向手机100发送控制命令,以指示该耳塞101-2被放入耳塞盒101-3。
方式(II):耳塞101-1或者耳塞101-2是否由佩戴状态切换为未佩戴状态。
在上述方式(2)中,如果耳塞101-1和耳塞101-2都被佩戴,手机100确定TWS耳机101处于双耳状态。方式(II)与上述方式(2)对应,在双耳状态下,手机100可以通过判断耳塞101-1或者耳塞101-2是否被佩戴,确定是否切换为单耳状态。例如,手机100可以在耳塞101-1或者耳塞101-2中任一耳塞未被佩戴时,确定TWS耳机101由双耳状态切换为单耳状态。即TWS耳机101的一个耳塞(如耳塞101-2)单独作为手机100的音频输入/输出设备被使用。当然,如果两个耳塞都没有由佩戴状态切换为未佩戴状态,TWS耳机101则不会由双耳状态切换为单耳状态。
示例性的,耳塞被用户从耳朵上拿下时,可以通过传感器(如光传感器或者骨传感器等),检测耳塞由佩戴状态切换为未佩戴状态。例如,假设耳塞101-1和耳塞101-2均被佩戴。以耳塞101-2被用户从耳朵上拿下为例。耳塞101-2可以通过传感器检测到耳塞101-2由佩戴状态切换为未佩戴状态。此时,耳塞101-2可以通过ACL链路2向手机100发送控制命令,以指示该耳塞101-2由佩戴状态切换为未佩戴状态。
方式(III):耳塞101-1和耳塞101-2是否断开连接。
方式(III)与上述方式(3)对应。在双耳状态下,TWS耳机101的两个耳塞配对连接。而在单耳状态下,TWS耳机101的两个耳塞则会断开连接。因此,手机100可以通过耳塞101-1和耳塞101-2是否断开连接,来确定TWS耳机101是否由双耳状态切换为单耳状态。
例如,在一些使用场景中,用户在使用TWS耳机101的两个耳塞(耳塞101-1和耳塞101-2)的过程中,可能会因为一些原因(如一个耳塞发出低电量提醒)停止使用两个耳塞中的一个耳塞,并将其放入耳塞盒101-3。其中,耳塞(如耳塞101-1或者耳塞101-2)的电量低于预设电量阈值时,可以发出低电量提醒。例如,耳塞可以通过语音提醒或者振动的方式发出低电量提醒。耳塞(如耳塞101-1)被放入耳塞盒101-3后,便可以与另一个耳塞(如耳塞101-2)断开连接。即TWS耳机101的两个耳塞未配对连接。其中,两个耳塞断开连接后,耳塞盒101-3外的耳塞(如耳塞101-2)可以向手机100指示两个耳塞断开连接。例如,耳塞101-2可以通过ACL链路2向手机100发送控制命令,以指示两个耳塞断开连接。
如果耳塞101-1和耳塞101-2断开连接,手机100则可以确定TWS耳机101由双耳状态切换为单耳状态。即TWS耳机101的一个耳塞(如耳塞101-2)单独作为手机100的音频输入/输出设备被使用。当然,如果耳塞101-1和耳塞101-2没有断开连接,手机100则可以确定TWS耳机101不会切换为单耳状态。
方式(IV):耳塞101-1或者耳塞101-2的电量是否低于预设电量阈值。
其中,耳塞(如耳塞101-1或者耳塞101-2)的电量低于预设电量阈值时,耳塞还可以通过ACL链路向手机100发送控制命令,以指示该耳塞的电量低于预设电量阈值。如果任一个耳塞的电量低于预设电量阈值,手机100则可以确定TWS耳机101由双耳状态切换为单耳状态。如果两个耳塞的电量都没有低于预设电量阈值(即手机100没有接收到两个耳塞发送的指示电量低于预设电量阈值的控制命令),TWS耳机101则不会切换为单耳状态。其中,方式(IV)可以对应于上述方式(1)-方式(3)中的任一种实现方式。
需要说明的是,手机100判断TWS耳机101是否由双耳状态切换为单耳状态的方法包括但不限于上述方式(I)-方式(IV)。例如,手机100可以通过耳塞101-1或者耳塞101-2是否与手机断开连接,判断TWS耳机101是否由双耳状态切换为单耳状态。其中,在一些使用场景中,用户可能会将一个耳塞戴在耳朵上自己用,将另一个耳塞交给其他用户使用。但是,在使用的过程中,该用户或者其他用户可能会发生位置移动,用户所佩戴的耳塞也会随之发生位置移动。当任一个耳塞与手机100之间的距离较远时,手机100可能会与耳塞断开连接。如果手机100检测到一个耳塞(如耳塞101-1)与手机100断开连接,手机100则可以确定TWS耳机101由双耳状态切换为单耳状态。
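上述方式(I)至方式(IV)以及"耳塞与手机断开连接"的判断,本质上都是手机侧对耳塞上报的控制命令或连接事件做分类。下面用一段示意性的Python代码归纳这一判断(事件名称均为本示例的假设,仅作示意):

```python
# 示意:手机侧根据收到的事件判断 TWS 耳机是否由双耳状态切换为单耳状态
SINGLE_EAR_TRIGGERS = {
    'earbud_in_box',         # 方式(I):任一耳塞被放入耳塞盒
    'earbud_not_worn',       # 方式(II):任一耳塞由佩戴状态切换为未佩戴状态
    'earbuds_disconnected',  # 方式(III):两个耳塞断开连接
    'earbud_low_battery',    # 方式(IV):任一耳塞电量低于预设电量阈值
    'earbud_link_lost',      # 其他:任一耳塞与手机断开连接
}

def switched_to_single(event):
    """收到 event 后,返回是否判定由双耳状态切换为单耳状态。"""
    return event in SINGLE_EAR_TRIGGERS

assert switched_to_single('earbud_in_box')
assert not switched_to_single('volume_up')  # 与单双耳切换无关的事件
```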
在上述方式(I)-方式(IV)中,如果TWS耳机101不切换为单耳状态,手机100则可以继续采用交织调度的传输方式,通过CIS(1)和CIS(2)与TWS耳机101的两个耳塞传输音频数据。即如图9所示,如果TWS耳机101不切换为单耳状态,手机100与TWS耳机101的左右耳塞可以继续执行S903。
在上述方式(I)-方式(IV)中,如果TWS耳机101由双耳状态切换为单耳状态(如第二单耳状态),例如在第二单耳状态使用耳塞101-2,不使用耳塞101-1,手机100则可以执行图9所示的S905,去激活CIS(1)。其中,手机100去激活CIS(1)后,可以停止通过CIS(1)与耳塞101-1传输音频数据。并且,手机100可以继续采用图10中的(a)所示的交织调度的传输方式,通过CIS(2)与耳塞101-2传输音频数据(即执行图9所示的S906)。
其中,在切换为单耳状态(如第二单耳状态)后,手机100按照图10中的(a)所示的交织调度的传输方式,与耳塞101-2传输音频数据的方法,即图9所示的S906具体可以包括上述过程(B)和过程(D),不包括上述过程(A)和过程(C)。换言之,手机100可以只在图10中的(a)所示子事件(2_1)中的“M->S”向耳塞101-2发送音频数据,在子事件(2_1)中的“S->M”接收耳塞101-2发送的音频数据;图10中的(a)所示子事件(2_2)中的“M->S”向耳塞101-2发送音频数据,在子事件(2_2)中的“S->M”接收耳塞101-2发送的音频数据。而不会继续在子事件(1_1)和子事件(1_2)中与耳塞101-1传输音频数据。
当然,TWS耳机101由双耳状态切换为单耳状态,也可能是切换为第一单耳状态。 在第一单耳状态下,使用耳塞101-1,不使用耳塞101-2。在这种情况下,手机100可以去激活CIS(2)。其中,手机100去激活CIS(2)后,可以停止通过CIS(2)与耳塞101-2传输音频数据。并且,手机100可以继续采用图10中的(a)所示的交织调度的传输方式,通过CIS(1)与耳塞101-1传输音频数据(图9未示出)。在这种情况下,手机100可以与耳塞101-1执行上述过程(A)和过程(C),手机100不会继续与耳塞101-2执行上述过程(B)和过程(D)。
需要强调的是,在本申请实施例中,对手机100而言,CIS被激活后,手机100才可以在激活的CIS上传输音频数据(在对应CIS的“M->S”发送音频数据,在“S->M”接收音频数据)。同样的,对耳塞而言,CIS被激活后,耳塞才可以在激活的CIS上传输音频数据(在对应CIS的“M->S”接收音频数据,在“S->M”发送音频数据)。手机100和耳塞都不会在去激活或者未被激活的CIS上传输音频数据。
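"仅在已激活的CIS上传输音频数据"这一门控规则,可以用下面的示意性Python代码表达(类名与方法名均为本示例的假设,仅为简化示意):

```python
# 示意:CIS 激活状态门控——只在已激活的 CIS 上传输音频数据(简化模型)
class CisState:
    def __init__(self):
        self.active = set()

    def activate(self, cis):
        self.active.add(cis)

    def deactivate(self, cis):
        self.active.discard(cis)

    def can_transmit(self, cis):
        # 去激活或者未被激活的 CIS 上不传输音频数据
        return cis in self.active

s = CisState()
s.activate('CIS(1)')            # 双耳切换为第二单耳状态前,先单独激活示例
assert s.can_transmit('CIS(1)')
assert not s.can_transmit('CIS(2)')   # 未被激活的 CIS 不可传输
s.deactivate('CIS(1)')
assert not s.can_transmit('CIS(1)')   # 去激活后同样不可传输
```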
需要说明的是,如果S900之后,手机100确定TWS耳机101处于双耳状态,上述第一CIG的CIS(1)和CIS(2)也可以被配置为串行调度的传输方式。其中,串行调度的传输方式的详细介绍可以参考本申请实施例其他部分的描述,这里不予赘述。
在双耳状态下,相比于串行调度的传输方式,交织调度的传输方式的优势在于:手机100可以将CIS(1)的子事件(1_1)和子事件(1_2),以及CIS(2)的子事件(2_1)和子事件(2_2)在时间上交织排布,即可以将CIS(1)的音频数据和CIS(2)的音频数据在时间上交织排布进行传输。这样可以使不同的CIS受干扰的程度更加均等,可以提升音频数据传输的抗干扰性能。
可以理解,在TWS耳机101由双耳状态切换为第二单耳状态后,手机100与耳塞101-2执行S906的过程中,手机100可能会接收到用户的挂起操作。该挂起操作用于触发TWS耳机暂停播放音频数据。在S906之后,本申请实施例的方法还可以包括S910。
例如,上述挂起操作可以为耳塞101-2作为手机100的输出设备执行S906播放音乐的过程中,用户对手机100显示的音乐播放界面的“暂停按钮”的点击操作(如单击操作);或者,该挂起操作可以为用户对手机100的“静音按键”的开启操作。其中,“静音按键”可以是手机100的物理按键。
又例如,上述挂起操作可以为耳塞101-2作为手机100的输入/输出设备执行S906的游戏场景中,用户对手机100显示的游戏界面的“暂停按钮”的点击操作(如单击操作);或者,该挂起操作可以为用户对手机100的“静音按键”的开启操作。
又例如,上述挂起操作可以为耳塞101-2作为手机100的输入/输出设备执行S906进行语音通信的过程中,用户对手机100显示的语音通信界面的“挂断按钮”的点击操作(如单击操作)。
又例如,上述挂起操作也可以是在上述音乐播放场景、语音通信场景或者游戏场景中,用户对耳塞101-2上的预设物理按键的第一操作(如单击操作、长按操作或者双击操作)。其中,用户对该预设物理按键的第一操作用于触发耳塞101-2暂停播放和采集声音信号。可以理解,对该预设物理按键的其他操作(如第二操作)可以触发耳塞101-2执行其他事件(例如,与耳塞101-1配对连接,与耳塞101-1断开连接等)。
在上述实施例中,响应于上述挂起操作,手机100可以暂停与耳塞101-2传输音频数据。此时需要避免的问题是:TWS耳机101由双耳状态切换为单耳状态(如第二单耳状态)后,手机100为耳塞配置的CIS的传输方式并不适用于TWS耳机101的当前状态(即单耳状态,如第二单耳状态)。例如,手机100将CIS(1)和CIS(2)配置为交织调度的传输方式更加适用于双耳状态,可以使CIS(1)和CIS(2)受干扰的程度更加均等,可以提升音频数据传输的抗干扰性能。但是,切换为单耳状态(如第二单耳状态)后,假设耳塞101-2单独作为手机100的输入/输出设备被使用。如果仍然采用图10中的(a)所示的交织调度的传输方式进行音频数据传输,手机100只会在子事件(2_1)和子事件(2_2)与耳塞101-2传输音频数据,停止在子事件(1_1)和子事件(1_2)与耳塞101-1传输音频数据。如此,子事件(2_1)和子事件(2_2)之间间隔有一段空闲时间(即子事件(1_2)),在时间上并不连续。而这段空闲时间可能会被其他传输(如Wi-Fi)占用,会加大手机100在子事件(2_1)和子事件(2_2)与耳塞101-2传输音频数据受到干扰的可能性。
基于此,响应于上述挂起操作,手机100可以重新执行S900,判断TWS耳机101当前处于单耳状态还是双耳状态,然后根据判断结果执行S901或者S911为TWS耳机101配置第一CIG。在上述实例中,由于TWS耳机101当前处于单耳状态(如第二单耳状态),手机100可以执行S911,将CIS(1)和CIS(2)配置为串行调度的传输方式。其中,手机100将CIS(1)和CIS(2)配置为串行调度的传输方式的具体方法,可以参考以下实施例其他部分的描述,这里不予赘述。
可以理解,响应于上述挂起操作,上述音频数据挂起(即停止)。手机100在音频数据挂起过程中重新配置CIS,这样在业务重新开始后,手机100便可以通过重新配置的CIS传输音频数据。如此,则不会因为重新配置CIS而导致业务中断。
进一步的,结合上述实施例,在S901-S906(双耳状态切换为单耳状态)之后,TWS耳机101还可以重新切换为双耳状态。例如,如图11所示,S901-S906之后,本申请实施例的方法还可以包括S914。在S914之后,手机100可以激活CIS(1),即执行S907。然后,如图11所示,手机100可以采用交织调度的传输方式,通过CIS(1)和CIS(2)与TWS耳机101的两个耳塞传输音频数据(即执行S903)。其中,S903之后,本申请实施例的方法还可以包括S904-S906和S910。
在另一些实施例中,在图9所示的S900之后,假设TWS耳机101处于单耳状态(如第一单耳状态),即TWS耳机101的一个耳塞(如耳塞101-1)单独作为手机100的音频输入/输出设备被使用。本实施例中,以CIS(1)和CIS(2)被配置为串行调度的传输方式为例,对本申请实施例的方法进行说明。
例如,在图9所示的S900之后,如果手机100确定TWS耳机101处于单耳状态(如第一单耳状态),手机100可以执行S911。其中,手机100可以通过执行图8所示的S801配置CIG的过程中,配置第一CIG,将CIS(1)和CIS(2)配置为图12中的(a)所示的串行调度的传输方式。具体的,如图12中的(a)或者图12中的(b)所示,CIS(1)的锚点(如CIS(1).x锚点)是第一CIG的锚点(如CIG(x)的锚点);而CIS(2)的锚点(如CIS(2).x锚点)与CIS(1)事件(x)的结束点相同。并且,CIS(1)的子间隔(如CIS(1)_子间隔)与CIS(2)的子间隔(如CIS(2)_子间隔)相同。
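与交织调度不同,串行调度下CIS(2)的锚点是整个CIS(1)事件(x)的结束点,两个CIS的子间隔相同。下面的示意性Python代码计算这一锚点关系(函数名与时长参数均为本示例的假设,仅作简化示意):

```python
# 示意:串行调度下 CIS(1)/CIS(2) 锚点的计算(简化模型)
# 假设:CIS(1) 事件(x) 含 n_subevents 个子事件,每个子事件时长 subevent_len

def sequential_anchors(cig_anchor, subevent_len, n_subevents=2):
    cis1_anchor = cig_anchor                               # CIS(1) 锚点 = CIG 锚点
    cis1_event_end = cis1_anchor + n_subevents * subevent_len
    cis2_anchor = cis1_event_end                           # CIS(2) 锚点 = CIS(1) 事件(x) 的结束点
    return cis1_anchor, cis2_anchor

# CIS(1) 事件占满 2 个连续子事件后,CIS(2) 事件才开始
assert sequential_anchors(0, 1250) == (0, 2500)
```

可以看到,串行调度下同一CIS的各子事件在时间上连续,这也是下文所述"单耳状态下抗干扰、且能整段空闲出时间给Wi-Fi等其他传输"的来源。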
随后,手机100可以通过与耳塞101-1执行S802,为耳塞101-1创建CIS(1)。 手机100可以通过与耳塞101-2执行S803,为耳塞101-2创建CIS(2)。其中,手机100为耳塞101-1创建CIS(1),为耳塞101-2创建CIS(2)的步骤可以在图9所示的S911-S912之间(图9未示出)。最后,手机100可以指示耳塞101-1激活CIS(1),但不会指示耳塞101-2激活CIS(2)(即执行图9所示的S912)。其中,手机100可以通过ACL(1)向耳塞101-1发送激活指令。该激活指令用于触发耳塞101-1激活CIS(1)。手机100不会向耳塞101-2发送激活指令,从而耳塞101-2不会激活CIS(2)。其中,在CIS(1)被激活后,手机100便可以按照图12中的(a)所示的串行调度的传输方式,与耳塞101-1传输音频数据(即执行图9所示的S913)。
在单耳状态(如第一单耳状态)下,手机100按照图12中的(a)所示的串行调度的传输方式,与耳塞101-1传输音频数据的方法,即图9所示的S913具体可以包括以下过程(a)和过程(b)。
过程(a):手机100从CIS(1).x锚点(即CIG(x)锚点)开始,在CIS(1)事件(x)的子事件(1_1)中的“M->S”向耳塞101-1发送音频数据(如音频数据包1)。耳塞101-1可以在子事件(1_1)中的“M->S”接收手机100发送的音频数据(如音频数据包1)。耳塞101-1在子事件(1_1)中的“S->M”向手机100发送第一数据。手机100在子事件(1_1)中的“S->M”接收耳塞101-1发送的第一数据。该第一数据可以包括:耳塞101-1向手机100回复的反馈信息;和/或,耳塞101-1中的麦克风(如麦克风160)采集到的音频数据。上述反馈信息可以为上述音频数据包1的ACK或者NACK。
过程(b):假设手机100在子事件(1_1)中的“S->M”接收到上述音频数据包1的ACK。手机100在子事件(1_2)中的“M->S”向耳塞101-1发送音频数据(如音频数据包2)。耳塞101-1可以在子事件(1_2)中的“M->S”接收手机100发送的音频数据(如音频数据包2)。耳塞101-1在子事件(1_2)中的“S->M”向手机100发送第三数据。手机100在子事件(1_2)中的“S->M”接收耳塞101-1发送的第三数据。该第三数据可以包括:耳塞101-1向手机100回复的反馈信息;和/或,耳塞101-1中的麦克风(如麦克风160)采集到的音频数据。上述反馈信息可以为上述音频数据包2的ACK或者NACK。
需要注意的是,在单耳状态(如第一单耳状态,即耳塞101-1作为手机100的输入/输出设备被使用时),手机100不会在图12中的(a)所示的子事件(2_1)和子事件(2_2)中与耳塞101-2传输音频数据。即图9所示的S913不包括下文S916中所述的过程(c)和过程(d)。
进一步的,手机100与耳塞101-1可以在CIG事件(x+n)中采用与CIG事件(x)相同的传输方式进行音频数据传输。n大于或者等于1,且n为整数。其中,手机100与耳塞101-1在CIG事件(x+n)中进行音频数据传输的方法,可以参考在CIG事件(x)中进行音频数据传输的方法,本申请实施例这里不予赘述。
其中,在手机100与耳塞101-1执行S913的过程中,TWS耳机101可能会发生单双耳切换,即TWS耳机101可能会由单耳状态(如第一单耳状态)切换为双耳状态。即手机100可以执行图9所示的S914。
示例性的,手机100可以通过以下方式(i)-方式(iii)判断TWS耳机101由单 耳状态切换为双耳状态。即手机100可以通过以下方式(i)-方式(iii)执行图9所示的S914:
方式(i):耳塞101-2是否被拿出耳塞盒101-3。
在上述方式(1)中,如果耳塞101-1和耳塞101-2都被拿出耳塞盒101-3,手机100确定TWS耳机101处于双耳状态;如果一个耳塞(如耳塞101-1)被拿出耳塞盒101-3,而另一个耳塞(如耳塞101-2)未被拿出耳塞盒101-3,手机100确定TWS耳机101处于单耳状态。方式(i)与上述方式(1)对应,在单耳状态下,手机100可以通过判断耳塞101-2是否被拿出耳塞盒101-3,确定是否切换为双耳状态。例如,手机100可以在耳塞101-2被拿出耳塞盒101-3后,确定TWS耳机101由单耳状态切换为双耳状态。即TWS耳机101的两个耳塞(耳塞101-1和耳塞101-2)一起作为手机100的音频输入/输出设备被使用。
方式(ii):耳塞101-2是否由未佩戴状态切换为佩戴状态。
在上述方式(2)中,如果耳塞101-1和耳塞101-2都被佩戴,手机100确定TWS耳机101处于双耳状态;如果只有一个耳塞(如耳塞101-1)被佩戴,手机100确定TWS耳机101处于单耳状态。方式(ii)与上述方式(2)对应,在单耳状态下,手机100可以通过判断耳塞101-2是否由未佩戴状态切换为佩戴状态,确定是否切换为双耳状态。例如,在耳塞101-1已经被佩戴的情况下,手机100可以在耳塞101-2被佩戴时,确定TWS耳机101由单耳状态切换为双耳状态。即TWS耳机101的两个耳塞(耳塞101-1和耳塞101-2)一起作为手机100的音频输入/输出设备被使用。
方式(iii):耳塞101-1和耳塞101-2配对连接。
方式(iii)与上述方式(3)对应。在单耳状态下,TWS耳机101的两个耳塞未配对连接或者断开连接。因此,手机100可以通过耳塞101-1和耳塞101-2是否配对连接,来确定TWS耳机101是否由单耳状态切换为双耳状态。
需要说明的是,手机100判断TWS耳机101是否由单耳状态切换为双耳状态的方法包括但不限于上述方式(i)-方式(iii)。例如,在单耳状态(耳塞101-1已经与手机100建立连接)中,手机100可以通过判断耳塞101-2是否与手机建立连接,判断TWS耳机101是否由单耳状态切换为双耳状态。
在上述方式(i)-方式(iii)中,如果TWS耳机101不切换为双耳状态,手机100则可以继续采用串行调度的传输方式,通过CIS(1)与耳塞101-1传输音频数据。即如图9所示,如果TWS耳机101不切换为双耳状态,手机100与TWS耳机101的左右耳塞可以继续执行S913。
在上述方式(i)-方式(iii)中,如果TWS耳机101由单耳状态切换为双耳状态,手机100则可以执行图9所示的S915,激活CIS(2)。其中,手机100激活CIS(2)后,可以通过CIS(2)与耳塞101-2传输音频数据。并且,手机100可以继续采用图12中的(a)所示的串行调度的传输方式,通过CIS(1)与耳塞101-1传输音频数据(即执行图9所示的S916)。
其中,在切换为双耳状态后,手机100按照图12中的(a)所示的串行调度的传输方式,与TWS耳机的两个耳塞传输音频数据的方法,即图9所示的S916具体可以包括上述过程(a)和过程(b),以及以下过程(c)和过程(d)。
过程(c):手机100从CIS(2).x锚点开始,在CIS(2)事件(x)的子事件(2_1)中的“M->S”向耳塞101-2发送音频数据(如音频数据包1)。耳塞101-2可以在子事件(2_1)中的“M->S”接收手机100发送的音频数据(如音频数据包1)。耳塞101-2在子事件(2_1)中的“S->M”向手机100发送第二数据。手机100在子事件(2_1)中的“S->M”接收耳塞101-2发送的第二数据。该第二数据可以包括:耳塞101-2向手机100回复的反馈信息;和/或,耳塞101-2中的麦克风(如麦克风160)采集到的音频数据。上述反馈信息可以为上述音频数据包1的ACK或者NACK。
过程(d):假设手机100在子事件(2_1)中的“S->M”接收到上述音频数据包1的ACK。手机100在CIS(2)事件(x)的子事件(2_2)中的“M->S”向耳塞101-2发送音频数据(如音频数据包2)。耳塞101-2可以在子事件(2_2)中的“M->S”接收手机100发送的音频数据(如音频数据包2)。耳塞101-2在子事件(2_2)中的“S->M”向手机100发送第四数据。手机100在子事件(2_2)中的“S->M”接收耳塞101-2发送的第四数据。该第四数据可以包括:耳塞101-2向手机100回复的反馈信息;和/或,耳塞101-2中的麦克风(如麦克风160)采集到的音频数据。上述反馈信息可以为上述音频数据包2的ACK或者NACK。
换言之,TWS耳机101由单耳状态切换为双耳状态后,手机100可以在图12中的(a)所示的子事件(1_1)和子事件(1_2)中与耳塞101-1传输音频数据,在图12中的(a)所示的子事件(2_1)和子事件(2_2)中与耳塞101-2传输音频数据。
需要说明的是,如果S900之后,手机100确定TWS耳机101处于单耳状态,上述第一CIG的CIS(1)和CIS(2)也可以被配置为交织调度的传输方式。其中,交织调度的传输方式的详细介绍可以参考本申请实施例其他部分的描述,这里不予赘述。
在单耳状态下,相比于交织调度的传输方式,串行调度的传输方式的优势在于:手机100可以在连续的时间(如子事件(1_1)和子事件(1_2)在时间上连续)与一个耳塞(如耳塞101-1)传输音频数据。这样,可以降低CIS受干扰的程度,可以提升音频数据传输的抗干扰性能。
并且,在单耳状态下,采用串行调度的传输方式,可以将较长的连续时间(如子事件(2_1)和子事件(2_2)对应的时间)空闲出来留给其他传输(如Wi-Fi)使用。这样,可以减少Wi-Fi与蓝牙频繁切换使用传输资源而带来的相互干扰。
可以理解,在TWS耳机101由单耳状态切换为双耳状态后,手机100与两个耳塞执行S916的过程中,手机100可能会接收到用户的挂起操作。该挂起操作的详细描述可以参考上述实施例中的相关内容,本申请实施例这里不予赘述。响应于上述挂起操作,手机100可以暂停与耳塞101-1和耳塞101-2传输音频数据。此时需要避免的问题是:TWS耳机101由单耳状态切换为双耳状态后,手机100为耳塞配置的CIS的传输方式并不适用于当前场景(即双耳状态)。例如,手机100将CIS(1)和CIS(2)配置为串行调度的传输方式更加适用于单耳状态,可以在连续的时间与一个耳塞传输音频数据,提升了音频数据传输的抗干扰性能。但是,切换为双耳状态后,如果仍然采用图12中的(a)所示的串行调度的传输方式进行音频数据传输,手机100在CIS(1)事件(x)(即子事件(1_1)和子事件(1_2))与耳塞101-1传输完音频数据后,才会在CIS(2)事件(x)(即子事件(2_1)和子事件(2_2))与耳塞101-2传输音频数据。相比于交织调度的传输方式,CIS(1)与CIS(2)受到干扰的程度可能会存在较大差异。
基于此,响应于上述挂起操作,手机100可以重新执行S900,判断TWS耳机101当前处于单耳状态还是双耳状态,然后根据判断结果执行S901或者S911为TWS耳机101配置第一CIG。在上述实例中,由于TWS耳机101当前处于双耳状态,手机100可以执行S901,将CIS(1)和CIS(2)配置为交织调度的传输方式。其中,手机100将CIS(1)和CIS(2)配置为交织调度的传输方式的具体方法,可以参考上述实施例中的描述,这里不予赘述。
需要注意的是,一般而言,手机100配置好CIS(即配置好音频数据的传输方式)之后,无论TWS耳机101的状态如何切换(如由单耳状态切换为双耳状态,或者由双耳状态切换为单耳状态),手机100都会按照配置好的传输方式(如串行调度的传输方式或者交织调度的传输方式)与TWS耳机101传输音频数据,直至音频数据结束。也就是说,音频数据的传输方式不会发生变化,音频数据也不会因为TWS耳机101的单双耳切换而中断。
只有在手机100接收到上述挂起操作,上述音频数据挂起(即停止)时,手机100才可以在音频数据挂起过程中重新配置CIS。这样在业务重新开始后,手机100便可以通过重新配置的CIS传输音频数据。如此,则不会因为重新配置CIS而导致业务中断。并且,重新配置的传输方式更加适用于TWS耳机101的当前状态(如单耳状态或者双耳状态),可以提升音频数据的传输效率。
进一步的,结合上述实施例,在S911-S916(单耳状态切换为双耳状态)之后,TWS耳机101还可以重新切换为单耳状态。例如,如图13所示,S911-S916之后,本申请实施例的方法还可以包括S904。在S904之后,手机100可以去激活CIS(2),即执行S917。然后,如图13所示,手机100可以采用串行调度的传输方式,通过CIS(1)与耳塞101-1传输音频数据(即执行S913)。其中,S913之后,本申请实施例的方法还可以包括S914-S916和S910。
在一些实施例中,双耳状态下,手机100向耳塞101-1和耳塞101-2发送的音频数据可以不同。以耳塞101-1是左耳塞,耳塞101-2是右耳塞为例。手机100向耳塞101-1发送左声道的音频数据,向耳塞101-2发送右声道的音频数据。耳塞101-1播放左声道的音频数据,耳塞101-2播放右声道的音频数据。即耳塞101-1和耳塞101-2结合起来播放立体声音频数据。在这种情况(简称情况1)下,手机100可以分别对需发送给左右耳塞的音频数据进行编码处理(即左右声道编码)。
在另一些实施例中,双耳状态下,手机100向耳塞101-1和耳塞101-2发送的音频数据可以相同。手机100向耳塞101-1和耳塞101-2发送的音频数据都是单声道的音频数据。耳塞101-1和耳塞101-2可以播放单声道的音频数据。在这种情况(简称情况2)下,手机100可以对需发送给左右耳塞的音频数据均进行单声道编码。
而在单耳状态下,为了提升用户的收听体验,手机100则不能采用左右声道编码,不能向正在使用的耳塞发送经过左声道编码或者右声道编码的音频数据。手机100可以对音频数据进行单声道编码,耳塞可以播放单声道的音频数据。
在上述情况(1)中,如果TWS耳机101由双耳状态切换为单耳状态,手机100需要将编码方式由左右声道编码切换为单声道编码。同样的,如果TWS耳机101由单 耳状态切换为上述情况(1)对应的双耳状态,手机100需要将编码方式由单声道编码切换为左右声道编码。
而在上述情况(2)中,如果TWS耳机101发生单双耳切换,例如,由双耳状态切换为单耳状态,或者,由单耳状态切换为双耳状态,手机100则不需要改变编码方式。
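上述情况(1)、情况(2)与单耳状态下的编码方式选择,可以用下面的示意性Python代码归纳(函数名、状态名与返回值均为本示例的假设,仅作示意,并非具体编解码器实现):

```python
# 示意:根据单双耳状态与是否立体声选择编码方式(简化模型)
def select_codec(state, stereo=True):
    """state: 'dual'(双耳)、'single_first'(第一单耳)或 'single_second'(第二单耳)。
    stereo=True 对应情况(1)(左右声道编码),False 对应情况(2)(单声道)。"""
    if state == 'dual' and stereo:
        # 情况(1):双耳状态下分别对左右声道编码
        return {'left': 'left_channel', 'right': 'right_channel'}
    # 情况(2) 的双耳状态,以及任意单耳状态,均采用单声道编码
    return 'mono'

# 情况(1) 下由双耳切换为单耳,需要把左右声道编码切换为单声道编码
assert select_codec('dual') == {'left': 'left_channel', 'right': 'right_channel'}
assert select_codec('single_second') == 'mono'
# 情况(2) 下单双耳切换不需要改变编码方式
assert select_codec('dual', stereo=False) == select_codec('single_first', stereo=False) == 'mono'
```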
在上述情况(2)对应的双耳状态下,采用上述串行调度或者交织调度的传输方式,手机100在不同的时间可以向TWS耳机101的左右耳塞分别传输相同的音频数据。例如,手机100在图10中的(a)或者图12中的(a)所示的子事件(1_1)中的“M->S”向耳塞101-1传输音频数据包1。手机100在图10中的(a)或者图12中的(a)所示的子事件(2_1)中的“M->S”向耳塞101-2传输音频数据包1。其中,手机100在不同时间段重复传输相同的音频数据,会导致对传输资源的浪费,降低了传输资源的有效利用率。
为了提升传输资源的有效利用率,在另一些实施例中,手机100配置第一CIG时,可以将上述CIS(1)和CIS(2)配置为联合调度的传输方式。
例如,假设TWS耳机101处于双耳状态,即TWS耳机101的两个耳塞(耳塞101-1和耳塞101-2)一起作为手机100的音频输入/输出设备被使用。本实施例中,以CIS(1)和CIS(2)被配置为联合调度的传输方式为例,对本申请实施例的方法进行说明。
例如,如果手机100确定TWS耳机101处于双耳状态,手机100可以配置第一CIG,将CIS(1)和CIS(2)配置为图14中的(a)所示的联合调度的传输方式。具体的,如图14中的(a)或者图14中的(b)所示,CIS(1)的锚点(如CIS(1).x锚点)和CIS(2)的锚点(如CIS(2).x锚点)都是第一CIG的锚点(如CIG(x)的锚点)。并且,如图14中的(a)所示,CIS(1)的子间隔(如CIS(1)_子间隔)与CIS(2)的子间隔(如CIS(2)_子间隔)相同。
随后,手机100可以为耳塞101-1创建CIS(1),为耳塞101-2创建CIS(2)。最后,手机100可以指示耳塞101-1激活CIS(1),指示耳塞101-2激活CIS(2)。其中,在CIS(1)和CIS(2)被激活后,手机100便可以按照图14中的(a)所示的联合调度的传输方式,与耳塞101-1和耳塞101-2传输音频数据。
在双耳状态下,手机100按照图14中的(a)所示的联合调度的传输方式,与耳塞101-1和耳塞101-2传输音频数据的方法可以包括以下过程(一)至过程(六)。
过程(一):手机100从CIS(1).x锚点(即CIS(2).x锚点)开始,以跳频的方式在CIS(1)事件(x)的子事件(1_1)和CIS(2)事件(x)的子事件(2_1)中的“M->S”(即图14中的(a)中加粗的“M->S”)发送音频数据(如音频数据包1)。耳塞101-1可以在图14中的(a)所示的子事件(1_1)中的“M->S”(即加粗的“M->S”)以跳频的方式接收手机100发送的音频数据包1。耳塞101-2可以在图14中的(a)所示的子事件(2_1)中的“M->S”(即加粗的“M->S”)以跳频的方式接收手机100发送的音频数据包1。
过程(二):耳塞101-1可以在子事件(1_1)中的“S->M”(实线未加粗的“S->M”)向手机100发送第一数据。手机100可以在子事件(1_1)中的“S->M”接收耳塞101-1 发送的第一数据。
过程(三):耳塞101-2可以在子事件(2_1)中的“S->M”(虚线的“S->M”)向手机100发送第二数据。手机100可以在子事件(2_1)中的“S->M”接收耳塞101-2发送的第二数据。
过程(四):手机100以跳频的方式在子事件(1_2)和子事件(2_2)中的“M->S”(即加粗的“M->S”)发送音频数据(如音频数据包2)。耳塞101-1可以在图14中的(a)所示的子事件(1_2)中的“M->S”以跳频的方式接收手机100发送的音频数据包2。耳塞101-2可以在图14中的(a)所示的子事件(2_2)中的“M->S”以跳频的方式接收手机100发送的音频数据包2。
过程(五):耳塞101-1可以在子事件(1_2)中的“S->M”(实线未加粗的“S->M”)向手机100发送第三数据。手机100可以在子事件(1_2)中的“S->M”接收耳塞101-1发送的第三数据。
过程(六):耳塞101-2可以在子事件(2_2)中的“S->M”(虚线的“S->M”)向手机100发送第四数据。手机100可以在子事件(2_2)中的“S->M”接收耳塞101-2发送的第四数据。
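联合调度相对于串行/交织调度在情况(2)(左右耳塞接收相同音频数据)下节省传输资源的效果,可以用下面的示意性Python代码说明(函数名与计数方式均为本示例的假设,仅为简化示意):

```python
# 示意:联合调度下,同一音频数据包在同一时间点以跳频方式同时发给左右耳塞,
# 每包只占用一个 "M->S" 时段;而串行/交织调度需对每包各向左右耳塞发送一次。
def joint_tx_count(packets):
    """联合调度:每包发送一次,左右耳塞同时接收。"""
    return len(packets)

def duplicated_tx_count(packets):
    """情况(2) 下的串行/交织调度:相同数据向两个耳塞各发一次。"""
    return 2 * len(packets)

pkts = ['pkt1', 'pkt2']
assert joint_tx_count(pkts) == 2           # 每包只传一次
assert duplicated_tx_count(pkts) == 4      # 相同数据重复传输,浪费传输资源
```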
其中,如果TWS耳机101由双耳状态切换为单耳状态(例如在单耳状态使用耳塞101-2,不使用耳塞101-1),手机100则可以去激活CIS(1)。其中,手机100去激活CIS(1)后,可以停止通过CIS(1)与耳塞101-1传输音频数据。并且,手机100可以继续采用图14中的(a)所示的联合调度的传输方式,通过CIS(2)与耳塞101-2传输音频数据。
具体的,在切换为单耳状态后,手机100按照图14中的(a)所示的联合调度的传输方式与耳塞101-2传输音频数据的方法可以包括上述过程(一)、过程(三)、过程(四)和过程(六),不包括过程(二)和过程(五)。并且,过程(一)和过程(四)中,耳塞101-1不会在图14中的(a)所示的“M->S”(即加粗的“M->S”)以跳频的方式接收手机100发送的音频数据包。
本申请实施例中,手机100可以以跳频的方式在同一时间点(即CIS(1).x锚点和CIS(2).x锚点,CIS(1).x锚点和CIS(2).x锚点相同)发送音频数据包。这样,TWS耳机101的左右耳塞也可以以跳频的方式在同一“M->S”接收音频数据包。这样,手机100则不会在不同时间段重复传输相同的音频数据,可以降低对传输资源的浪费,提升传输资源的有效利用率。
本申请另一些实施例还提供了一种电子设备,该电子设备可以包括一个或多个处理器;存储器;以及一个或多个计算机程序,上述各器件可以通过一个或多个通信总线连接。其中该一个或多个计算机程序被存储在上述存储器中,并被配置为被该一个或多个处理器执行,该一个或多个计算机程序包括指令,上述指令可以用于执行如图8、图9、图10、图11、图12、图13或图14中任一附图对应的描述中手机100执行的各个功能或者步骤。其中,该电子设备的结构可以参考图6A所示的电子设备100的结构。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模 块,以完成以上描述的全部或者部分功能。上述描述的***,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的***,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (19)

  1. 一种音频数据传输方法,其特征在于,用于电子设备与真无线立体声TWS耳机的音频数据传输,所述TWS耳机包括第一耳塞和第二耳塞,所述方法包括:
    所述电子设备通过第一基于连接的等时流组CIG的第一等时音频流CIS与所述第一耳塞传输音频数据,通过所述第一CIG的第二CIS与所述第二耳塞传输音频数据;
    所述电子设备确定所述TWS耳机由双耳状态切换为第一单耳状态;其中,所述双耳状态为所述第一耳塞和所述第二耳塞一起作为所述电子设备的音频输入/输出设备被使用的状态,所述第一单耳状态为所述第一耳塞单独作为所述电子设备的音频输入/输出设备被使用的状态;
    响应于所述确定,所述电子设备去激活所述第二CIS,停止通过所述第二CIS与所述第二耳塞传输音频数据,并继续通过所述第一CIS与所述第一耳塞传输音频数据。
  2. 根据权利要求1所述的方法,其特征在于,在所述响应于所述确定,所述电子设备去激活所述第二CIS,停止通过所述第二CIS与所述第二耳塞传输音频数据,继续通过所述第一CIS与所述第一耳塞传输音频数据之后,所述方法还包括:
    所述电子设备确定所述TWS耳机由所述第一单耳状态切换为所述双耳状态;
    响应于确定所述TWS耳机由所述第一单耳状态切换为所述双耳状态,所述电子设备继续通过所述第一CIS与所述第一耳塞传输音频数据,并激活所述第二CIS,通过所述第二CIS与所述第二耳塞传输音频数据。
  3. 根据权利要求2所述的方法,其特征在于,在所述响应于确定所述TWS耳机由所述第一单耳状态切换为所述双耳状态,所述电子设备继续通过所述第一CIS与所述第一耳塞传输音频数据,并激活所述第二CIS,通过所述第二CIS与所述第二耳塞传输音频数据之后,所述方法还包括:
    所述电子设备确定所述TWS耳机由所述双耳状态切换为第二单耳状态,所述第二单耳状态为所述第二耳塞单独作为所述电子设备的音频输入/输出设备被使用的状态;
    响应于确定所述TWS耳机由所述双耳状态切换为所述第二单耳状态,所述电子设备去激活所述第一CIS,停止通过所述第一CIS与所述第一耳塞传输音频数据,并继续通过所述第二CIS与所述第二耳塞传输音频数据。
  4. 根据权利要求1-3中任意一项所述的方法,其特征在于,在所述电子设备通过第一CIG的第一CIS与所述第一耳塞传输音频数据之前,所述方法还包括:
    所述电子设备确定所述TWS耳机处于所述双耳状态;
    所述电子设备为所述TWS耳机配置所述第一CIG,所述第一CIG包括所述第一CIS和所述第二CIS;
    所述电子设备为所述第一耳塞配置所述第一CIS,为所述第二耳塞配置所述第二CIS;
    所述电子设备激活所述第一CIS和所述第二CIS。
  5. 根据权利要求1-4中任意一项所述的方法,其特征在于,在所述TWS耳机处于所述双耳状态的情况下,所述电子设备配置所述第一CIG时,所述第一CIS的锚点是所述第一CIG的CIG锚点,所述第二CIS的锚点与所述第一CIS的CIS事件中的第 一个子事件的结束点相同,所述第一CIS的第二个子事件的起始点是所述第二CIS的第一个子事件的结束点;
    其中,所述第一CIS和所述第二CIS均包括多个CIS事件;所述第一CIG包括多个CIG事件;每个CIG事件包括所述第一CIS的一个CIS事件和所述第二CIS的一个CIS事件;所述第一CIS的每个CIS事件中包括N1个子事件,所述N1大于或者等于2;所述第二CIS的每个CIS事件中包括N2个子事件,所述N2大于或者等于2;
    其中,所述电子设备从所述第一CIS的锚点开始通过所述第一CIS与所述第一耳塞传输音频数据,所述电子设备从所述第二CIS的锚点开始通过所述第二CIS与所述第二耳塞传输音频数据。
  6. 根据权利要求1-4中任意一项所述的方法,其特征在于,在所述TWS耳机处于所述双耳状态的情况下,所述电子设备配置所述第一CIG时,所述第一CIS的锚点和所述第二CIS的锚点均为所述第一CIG的CIG锚点;
    其中,所述第一CIG包括多个CIG事件;所述第一CIG的CIG锚点是所述CIG事件的开始时间点;
    其中,所述电子设备从所述第一CIS的锚点开始通过所述第一CIS与所述第一耳塞传输音频数据,所述电子设备从所述第二CIS的锚点开始通过所述第二CIS与所述第二耳塞传输音频数据。
  7. 根据权利要求1-6中任意一项所述的方法,其特征在于,在所述电子设备确定所述TWS耳机由双耳状态切换为第一单耳状态之后,所述方法还包括:
    所述电子设备接收用户的挂起操作,所述挂起操作用于触发所述TWS耳机暂停播放音频数据;
    响应于所述挂起操作,所述电子设备确定所述TWS耳机当前处于所述第一单耳状态,为所述TWS耳机重新配置第一CIG,重配的第一CIG包括重配的第一CIS和重配的第二CIS;
    所述电子设备为所述第一耳塞配置所述重配的第一CIS,并激活所述重配的第一CIS,通过所述重配的第一CIS从所述重配的第一CIS的锚点开始与所述第一耳塞传输音频数据;
    其中,所述重配的第二CIS在所述第一单耳状态下不被激活。
  8. 根据权利要求7所述的方法,其特征在于,在所述TWS耳机当前处于所述第一单耳状态的情况下,所述电子设备配置所述重配的第一CIG时,所述重配的第一CIS的锚点是所述重配的第一CIG的CIG锚点,所述重配的第二CIS的锚点与所述重配的第一CIS的CIS事件的结束点相同;
    其中,所述重配的第一CIS和所述重配的第二CIS均包括多个CIS事件;所述重配的第一CIG包括多个CIG事件;每个CIG事件包括所述重配的第一CIS的一个CIS事件和所述重配的第二CIS的一个CIS事件;所述重配的第一CIG的CIG锚点是所述CIG事件的开始时间点;
    所述电子设备从所述重配的第一CIS的锚点开始通过所述重配的第一CIS与所述第一耳塞传输音频数据,所述电子设备从所述重配的第二CIS的锚点开始通过所述重配的第二CIS与所述第二耳塞传输音频数据。
  9. 一种音频数据传输方法,其特征在于,用于电子设备与真无线立体声TWS耳机的音频数据传输,所述TWS耳机包括第一耳塞和第二耳塞,所述方法包括:
    所述电子设备确定所述TWS耳机处于第一单耳状态,所述第一单耳状态是所述第一耳塞单独作为所述电子设备的音频输入/输出设备被使用的状态;
    响应于所述确定,所述电子设备为所述第一耳塞配置第一基于连接的等时流组CIG,所述第一CIG包括第一等时音频流CIS和第二CIS;
    所述电子设备为所述第一耳塞配置所述第一CIS,并激活所述第一CIS,通过所述第一CIS与所述第一耳塞传输音频数据;所述第二CIS在所述第一单耳状态下处于不激活状态。
  10. 根据权利要求9所述的方法,其特征在于,在所述电子设备为所述第一耳塞配置所述第一CIS,并激活所述第一CIS,通过所述第一CIS与所述第一耳塞传输音频数据之后,所述方法还包括:
    所述电子设备确定所述TWS耳机由所述第一单耳状态切换为双耳状态,所述双耳状态为所述第一耳塞和所述第二耳塞一起作为所述电子设备的音频输入/输出设备被使用的状态;
    响应于确定所述TWS耳机由所述第一单耳状态切换为所述双耳状态,所述电子设备激活所述第二CIS,通过所述第二CIS与所述第二耳塞传输音频数据,并继续通过所述第一CIS与所述第一耳塞传输音频数据。
  11. 根据权利要求10所述的方法,其特征在于,在所述响应于确定所述TWS耳机由所述第一单耳状态切换为所述双耳状态,所述电子设备激活所述第二CIS,通过所述第二CIS与所述第二耳塞传输音频数据,并继续通过所述第一CIS与所述第一耳塞传输音频数据之后,所述方法还包括:
    所述电子设备确定所述TWS耳机由所述双耳状态切换为第二单耳状态,所述第二单耳状态为所述第二耳塞单独作为所述电子设备的音频输入/输出设备被使用的状态;
    响应于确定所述TWS耳机由所述双耳状态切换为所述第二单耳状态,所述电子设备去激活所述第一CIS,停止通过所述第一CIS与所述第一耳塞传输音频数据,并继续通过所述第二CIS与所述第二耳塞传输音频数据。
  12. 根据权利要求9-11中任意一项所述的方法,其特征在于,在所述TWS耳机处于所述第一单耳状态或者第二单耳状态的情况下,所述第一CIS的锚点是所述第一CIG的CIG锚点,所述第二CIS的锚点与所述第一CIS的CIS事件的结束点相同;所述第二单耳状态为所述第二耳塞单独作为所述电子设备的音频输入/输出设备被使用的状态;
    其中,所述第一CIS和所述第二CIS均包括多个CIS事件;所述第一CIG包括多个CIG事件;每个CIG事件包括所述第一CIS的一个CIS事件和所述第二CIS的一个CIS事件;所述第一CIG的CIG锚点是所述CIG事件的开始时间点;
    其中,所述电子设备从所述第一CIS的锚点开始通过所述第一CIS与所述第一耳塞传输音频数据,所述电子设备从所述第二CIS的锚点开始通过所述第二CIS与所述第二耳塞传输音频数据。
  13. 根据权利要求10-12中任意一项所述的方法,其特征在于,在所述响应于确 定所述TWS耳机由所述第一单耳状态切换为所述双耳状态,所述电子设备激活所述第二CIS,通过所述第二CIS与所述第二耳塞传输音频数据,并继续通过所述第一CIS与所述第一耳塞传输音频数据之后,所述方法还包括:
    所述电子设备接收用户的挂起操作,所述挂起操作用于触发所述TWS耳机暂停播放音频数据;
    响应于所述挂起操作,所述电子设备确定所述TWS耳机当前处于所述双耳状态,为所述TWS耳机重新配置第一CIG,重配的第一CIG包括重配的第一CIS和重配的第二CIS;
    所述电子设备为所述第一耳塞配置所述重配的第一CIS,为所述第二耳塞配置所述重配的第二CIS;
    所述电子设备激活所述重配的第一CIS和所述重配的第二CIS;
    所述电子设备通过所述重配的第一CIS与所述第一耳塞传输音频数据,通过所述重配的第二CIS与所述第二耳塞传输音频数据。
  14. 根据权利要求13所述的方法,其特征在于,在所述TWS耳机当前处于所述双耳状态的情况下,所述电子设备配置所述重配的第一CIG时,所述重配的第一CIS的锚点是所述重配的第一CIG的CIG锚点,所述重配的第二CIS的锚点与所述重配的第一CIS的CIS事件中的第一个子事件的结束点相同,所述重配的第一CIS的第二个子事件的起始点是所述重配的第二CIS的第一个子事件的结束点;
    其中,所述重配的第一CIS和所述重配的第二CIS均包括多个CIS事件;所述重配的第一CIG包括多个CIG事件;每个CIG事件包括所述重配的第一CIS的一个CIS事件和所述重配的第二CIS的一个CIS事件;所述重配的第一CIS的每个CIS事件中包括N1个子事件,所述N1大于或者等于2;所述重配的第二CIS的每个CIS事件中包括N2个子事件,所述N2大于或者等于2;
    其中,所述电子设备从所述重配的第一CIS的锚点开始通过所述重配的第一CIS与所述第一耳塞传输音频数据,所述电子设备从所述重配的第二CIS的锚点开始通过所述重配的第二CIS与所述第二耳塞传输音频数据。
  15. 根据权利要求13所述的方法,其特征在于,在所述TWS耳机处于所述双耳状态的情况下,所述电子设备配置所述重配的第一CIG时,所述重配的第一CIS的锚点和所述重配的第二CIS的锚点均为所述重配的第一CIG的CIG锚点;
    其中,所述重配的第一CIG包括多个CIG事件;所述重配的第一CIG的CIG锚点是所述CIG事件的开始时间点;
    其中,所述电子设备从所述重配的第一CIS的锚点开始通过所述重配的第一CIS与所述第一耳塞传输音频数据,所述电子设备从所述重配的第二CIS的锚点开始通过所述重配的第二CIS与所述第二耳塞传输音频数据。
  16. 一种电子设备,其特征在于,包括:一个或多个处理器、存储器和无线通信模块;
    所述存储器和所述无线通信模块与所述一个或多个处理器耦合,所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述一个或多个处理器执行所述计算机指令时,所述电子设备执行如权利要求1-15中任一项所述的音频数 据传输方法。
  17. 一种蓝牙通信***,其特征在于,所述蓝牙通信***包括:真无线立体声TWS耳机,以及如权利要求16所述的电子设备。
  18. 一种计算机存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-15中任一项所述的音频数据传输方法。
  19. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-15中任一项所述的音频数据传输方法。
PCT/CN2018/123243 2018-12-24 2018-12-24 应用于tws耳机单双耳切换的音频数据传输方法及设备 WO2020132839A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP18944882.2A EP3883259A4 (en) 2018-12-24 2018-12-24 METHOD AND DEVICE FOR AUDIO DATA TRANSMISSION APPLIED TO A MONAURAL AND BINAURAL MODES SWITCHING OF A TWS EARPHONE
PCT/CN2018/123243 WO2020132839A1 (zh) 2018-12-24 2018-12-24 应用于tws耳机单双耳切换的音频数据传输方法及设备
CN202210711245.XA CN115190389A (zh) 2018-12-24 2018-12-24 应用于tws耳机单双耳切换的音频数据传输方法及设备
US17/417,700 US11778363B2 (en) 2018-12-24 2018-12-24 Audio data transmission method applied to switching between single-earbud mode and double-earbud mode of TWS headset and device
CN202210712039.0A CN115175043A (zh) 2018-12-24 2018-12-24 应用于tws耳机单双耳切换的音频数据传输方法及设备
CN201880098184.6A CN112789866B (zh) 2018-12-24 2018-12-24 应用于tws耳机单双耳切换的音频数据传输方法及设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/123243 WO2020132839A1 (zh) 2018-12-24 2018-12-24 应用于tws耳机单双耳切换的音频数据传输方法及设备

Publications (1)

Publication Number Publication Date
WO2020132839A1 true WO2020132839A1 (zh) 2020-07-02

Family

ID=71129433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123243 WO2020132839A1 (zh) 2018-12-24 2018-12-24 应用于tws耳机单双耳切换的音频数据传输方法及设备

Country Status (4)

Country Link
US (1) US11778363B2 (zh)
EP (1) EP3883259A4 (zh)
CN (3) CN115190389A (zh)
WO (1) WO2020132839A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113115178A (zh) * 2021-05-12 2021-07-13 西安易朴通讯技术有限公司 音频信号处理方法及装置
CN114157942A (zh) * 2021-11-30 2022-03-08 广州番禺巨大汽车音响设备有限公司 一种控制真实无线立体声的音响***的方法及***
WO2022087924A1 (zh) * 2020-10-28 2022-05-05 Oppo广东移动通信有限公司 音频控制方法和设备
WO2022103237A1 (ko) * 2020-11-16 2022-05-19 엘지전자 주식회사 근거리 무선 통신 시스템에서 등시 데이터를 전송하기 위한 방법 및 이에 대한 장치
CN115086922A (zh) * 2021-03-11 2022-09-20 Oppo广东移动通信有限公司 一种蓝牙通信方法、蓝牙设备以及计算机存储介质
WO2022206270A1 (zh) * 2021-04-01 2022-10-06 Oppo广东移动通信有限公司 设备添加方法、装置、蓝牙芯片及设备

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
US20220039041A1 (en) * 2018-12-07 2022-02-03 Huawei Technologies Co., Ltd. Point-to-Multipoint Data Transmission Method and Electronic Device
TWI779337B (zh) * 2020-08-24 2022-10-01 瑞昱半導體股份有限公司 藍牙耳機系統以及藍牙耳機收納暨充電盒
US20220256028A1 (en) * 2021-02-08 2022-08-11 Samsung Electronics Co., Ltd. System and method for simultaneous multi-call support capability on compatible audio devices
CN115086483B (zh) * 2021-12-31 2023-03-07 荣耀终端有限公司 一种协同数据流的控制方法以及相关设备
WO2023224214A1 (ko) * 2022-05-20 2023-11-23 삼성전자 주식회사 무선 환경 내의 적응적 통신을 위한 전자 장치 및 방법
CN114666777B (zh) * 2022-05-26 2022-07-29 成都市安比科技有限公司 一种蓝牙音频***的带宽提升方法
CN115002940B (zh) * 2022-08-02 2022-12-27 荣耀终端有限公司 蓝牙通信方法、装置及存储介质
CN115175065B (zh) * 2022-09-06 2023-01-17 荣耀终端有限公司 广播方法、tws耳机及存储介质
WO2024085664A1 (ko) * 2022-10-18 2024-04-25 삼성전자 주식회사 전자 장치 및 전자 장치에서 설정 변경에 따라 데이터를 송신 및/또는 수신하는 방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090089813A1 (en) * 2007-10-02 2009-04-02 Conexant Systems, Inc. Method and system for dynamic audio stream redirection
CN102187690A (zh) * 2008-10-14 2011-09-14 唯听助听器公司 在助听器***中转换双耳立体声的方法以及助听器***
CN105284134A (zh) * 2012-12-03 2016-01-27 索诺瓦公司 将音频信号无线流式传输到多个音频接收器设备
CN108696784A (zh) * 2018-06-06 2018-10-23 歌尔科技有限公司 一种无线耳机角色切换的方法、无线耳机及tws耳机
CN108718467A (zh) * 2018-06-06 2018-10-30 歌尔科技有限公司 一种语音数据的传输方法、无线耳机及tws耳机

Family Cites Families (18)

Publication number Priority date Publication date Assignee Title
CN102113346B (zh) * 2008-07-29 2013-10-30 杜比实验室特许公司 用于电声通道的自适应控制和均衡的方法
US20120033620A1 (en) * 2010-08-03 2012-02-09 Nxp B.V. Synchronization for data transfers between physical layers
US8768252B2 (en) 2010-09-02 2014-07-01 Apple Inc. Un-tethered wireless audio system
US9949205B2 (en) 2012-05-26 2018-04-17 Qualcomm Incorporated Smart battery wear leveling for audio devices
DK2675189T3 (en) 2012-06-14 2015-11-09 Oticon As Binaural listening system with automatic mode can
US9693127B2 (en) 2014-05-14 2017-06-27 Samsung Electronics Co., Ltd Method and apparatus for communicating audio data
US10136429B2 (en) * 2014-07-03 2018-11-20 Lg Electronics Inc. Method for transmitting and receiving audio data in wireless communication system supporting bluetooth communication and device therefor
WO2014184395A2 (en) 2014-09-15 2014-11-20 Phonak Ag Hearing assistance system and method
CN105491469A (zh) 2014-09-15 2016-04-13 Tcl集团股份有限公司 一种基于耳机佩戴状态控制音频输出模式的方法及***
US10148453B2 (en) * 2016-02-24 2018-12-04 Qualcomm Incorporated Using update slot to synchronize to Bluetooth LE isochronous channel and communicate state changes
US10306072B2 (en) * 2016-04-12 2019-05-28 Lg Electronics Inc. Method and device for controlling further device in wireless communication system
US10034160B2 (en) * 2016-04-14 2018-07-24 Lg Electronics Inc. Method and apparatus for transmitting or receiving data using bluetooth in wireless communication system
US10798548B2 (en) * 2016-08-22 2020-10-06 Lg Electronics Inc. Method for controlling device by using Bluetooth technology, and apparatus
US10560974B2 (en) * 2016-09-11 2020-02-11 Lg Electronics Inc. Method and apparatus for connecting device by using Bluetooth technology
CN107277668B (zh) 2017-07-28 2019-05-31 广州黑格智能科技有限公司 一种双蓝牙耳机
US10652659B2 (en) * 2017-10-05 2020-05-12 Intel Corporation Methods and apparatus to facilitate time synchronization of audio over bluetooth low energy
CN107894881A (zh) 2017-10-18 2018-04-10 恒玄科技(上海)有限公司 蓝牙耳机的主从连接切换、通话监听和麦克切换的方法
US20220039041A1 (en) * 2018-12-07 2022-02-03 Huawei Technologies Co., Ltd. Point-to-Multipoint Data Transmission Method and Electronic Device

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20090089813A1 (en) * 2007-10-02 2009-04-02 Conexant Systems, Inc. Method and system for dynamic audio stream redirection
CN102187690A (zh) * 2008-10-14 2011-09-14 唯听助听器公司 在助听器***中转换双耳立体声的方法以及助听器***
CN105284134A (zh) * 2012-12-03 2016-01-27 索诺瓦公司 将音频信号无线流式传输到多个音频接收器设备
CN108696784A (zh) * 2018-06-06 2018-10-23 歌尔科技有限公司 一种无线耳机角色切换的方法、无线耳机及tws耳机
CN108718467A (zh) * 2018-06-06 2018-10-30 歌尔科技有限公司 一种语音数据的传输方法、无线耳机及tws耳机

Cited By (8)

Publication number Priority date Publication date Assignee Title
WO2022087924A1 (zh) * 2020-10-28 2022-05-05 Oppo广东移动通信有限公司 音频控制方法和设备
EP4228285A4 (en) * 2020-10-28 2024-01-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. AUDIO CONTROL METHOD AND APPARATUS
WO2022103237A1 (ko) * 2020-11-16 2022-05-19 엘지전자 주식회사 근거리 무선 통신 시스템에서 등시 데이터를 전송하기 위한 방법 및 이에 대한 장치
CN115086922A (zh) * 2021-03-11 2022-09-20 Oppo广东移动通信有限公司 一种蓝牙通信方法、蓝牙设备以及计算机存储介质
WO2022206270A1 (zh) * 2021-04-01 2022-10-06 Oppo广东移动通信有限公司 设备添加方法、装置、蓝牙芯片及设备
CN113115178A (zh) * 2021-05-12 2021-07-13 西安易朴通讯技术有限公司 音频信号处理方法及装置
CN114157942A (zh) * 2021-11-30 2022-03-08 广州番禺巨大汽车音响设备有限公司 一种控制真实无线立体声的音响***的方法及***
CN114157942B (zh) * 2021-11-30 2024-03-26 广州番禺巨大汽车音响设备有限公司 一种控制真实无线立体声的音响***的方法及***

Also Published As

Publication number Publication date
US11778363B2 (en) 2023-10-03
EP3883259A4 (en) 2021-12-15
US20220078541A1 (en) 2022-03-10
EP3883259A1 (en) 2021-09-22
CN115175043A (zh) 2022-10-11
CN112789866A (zh) 2021-05-11
CN112789866B (zh) 2022-07-12
CN115190389A (zh) 2022-10-14

Similar Documents

Publication Publication Date Title
WO2020132839A1 (zh) 应用于tws耳机单双耳切换的音频数据传输方法及设备
WO2020113588A1 (zh) 一种点对多点的数据传输方法及电子设备
CN112868244B (zh) 一种点对多点的数据传输方法及设备
WO2020133183A1 (zh) 音频数据的同步方法及设备
WO2020107485A1 (zh) 一种蓝牙连接方法及设备
WO2020124581A1 (zh) 一种音频数据传输方法及电子设备
WO2020124610A1 (zh) 一种传输速率的控制方法及设备
CN113169915B (zh) 无线音频***、音频通讯方法及设备
WO2020077512A1 (zh) 语音通话方法、电子设备及***
CN112913321B (zh) 一种使用蓝牙耳机进行通话的方法、设备及***
WO2020124371A1 (zh) 一种数据信道的建立方法及设备
WO2020132907A1 (zh) 一种音频数据的通信方法及电子设备
WO2022135303A1 (zh) 一种tws耳机连接方法及设备
WO2020118641A1 (zh) 一种麦克风mic切换方法及设备
WO2022213689A1 (zh) 一种音频设备间语音互通的方法及设备
WO2021233398A1 (zh) 无线音频***、无线通讯方法及设备
CN113678481B (zh) 无线音频***、音频通讯方法及设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18944882

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018944882

Country of ref document: EP

Effective date: 20210614

NENP Non-entry into the national phase

Ref country code: DE