CN116112600A - Call volume adjusting method, electronic equipment and storage medium - Google Patents

Call volume adjusting method, electronic equipment and storage medium

Info

Publication number
CN116112600A
Authority
CN
China
Prior art keywords
sound
generating unit
noise
application
electronic device
Prior art date
Legal status
Pending
Application number
CN202111328759.9A
Other languages
Chinese (zh)
Inventor
朱珂
张鑫
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202111328759.9A
Publication of CN116112600A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 1/00 - Substation equipment, e.g. for use by subscribers
    • H04M 1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M 1/00 - Substation equipment, e.g. for use by subscribers
    • H04M 1/60 - Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M 1/6033 - Substation equipment including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M 1/6041 - Portable telephones adapted for handsfree use
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 - Reducing energy consumption in communication networks
    • Y02D 30/70 - Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The application provides a call volume adjusting method, an electronic device and a storage medium, and relates to the technical field of terminals. The method comprises the following steps: the electronic device receives a first operation of a user, wherein the first operation is used for triggering the electronic device to initiate a call; the electronic device detects noise data of the environmental sound in response to the first operation; the electronic device determines that the noise data is in a first noise section and outputs a sound signal through a first sound generating unit; the electronic device determines that the noise data is switched from the first noise section to a second noise section, and outputs sound signals through the first sound generating unit and a second sound generating unit; wherein the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section.

Description

Call volume adjusting method, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a method for adjusting call volume, an electronic device, and a storage medium.
Background
At present, in a call scenario of an electronic device (such as a mobile phone), part of the sound emitted by the mobile phone is transmitted into the user's ear from the receiver, while the other part is transmitted to the external environment through the opening of the receiver, which causes a sound leakage phenomenon in a quiet environment. Based on this, in the related art, the volume of the mobile phone during a call is reduced in certain ways so as to reduce sound leakage. However, this in turn can lead to the user not hearing the sound made by the handset clearly in a noisy environment.
Disclosure of Invention
The application provides a call volume adjusting method, electronic equipment and a storage medium.
The application adopts the following technical scheme:
in a first aspect, a method for adjusting call volume is provided, and the method is applied to electronic equipment, wherein the electronic equipment comprises a first sounding unit and a second sounding unit; the method comprises the following steps: the electronic equipment receives a first operation of a user, wherein the first operation is used for triggering the electronic equipment to initiate a call; the electronic device responds to the first operation and detects noise data of the environmental sound; the electronic equipment judges that the noise data is in a first noise section and outputs a sound signal through a first sound generating unit; in the first noise section, the larger the noise data is, the larger the volume of the sound signal output by the first sound generating unit is; the electronic equipment judges that the noise data is switched from a first noise section to a second noise section, and sound signals are output through the first sound generating unit and the second sound generating unit; in the second noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is, and the smaller the volume of the sound signal output by the first sound generating unit is; in the second noise section, the smaller the noise data is, the smaller the volume of the sound signal output by the second sound generating unit is, and the larger the volume of the sound signal output by the first sound generating unit is; wherein the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section.
Based on the first aspect, when the electronic device initiates a call, the electronic device detects noise data of the environmental sound. When the noise data is in the first noise section, the electronic device outputs a sound signal through the first sound generating unit; when the noise data is in the second noise section, the electronic device outputs sound signals through the first sound generating unit and the second sound generating unit. Because the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section, the noise data in the first noise section is noise data in a quiet environment, and the noise data in the second noise section is noise data in a noisy environment. Moreover, in the first noise section, the larger the noise data, the larger the volume of the sound signal output by the first sound generating unit; in the second noise section, the larger the noise data, the larger the volume of the sound signal output by the second sound generating unit and the smaller the volume of the sound signal output by the first sound generating unit, while the smaller the noise data, the smaller the volume of the sound signal output by the second sound generating unit and the larger the volume of the sound signal output by the first sound generating unit. That is, when the noise data falls in different noise sections, the electronic device outputs the sound signal through different sound generating units, and different noise data lead to different output volumes, so that during a call the user neither disturbs the people nearby in a quiet environment nor fails to hear the other party clearly in a noisy environment, which improves the call experience of the user.
In one possible design manner of the first aspect, the method further includes: the electronic equipment judges that the noise data is switched from the second noise interval to the third noise interval, and the second sounding unit outputs a sound signal; in the third noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is; wherein the maximum value of the noise data of the third noise section is greater than the maximum value of the noise data of the second noise section.
In this design, since the maximum value of the noise data in the third noise section is larger than the maximum value of the noise data in the second noise section, and the minimum value of the noise data in the second noise section is larger than the maximum value of the noise data in the first noise section, the noise data in the third noise section is noise data in a noisy environment. In the third noise section, the electronic device outputs a sound signal through the second sound generating unit, and the larger the noise data, the larger the volume of the sound signal output by the second sound generating unit, so that the user can clearly hear the voice of the other party in a noisy environment, which improves the call experience.
In one possible design of the first aspect, the noise data in the first noise interval is less than 35dB; noise data in the second noise section is greater than or equal to 35dB and less than or equal to 65dB; the noise data in the third noise interval is greater than 65dB.
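To make the interval logic above concrete, the following Java sketch (not part of the patent; the class, method and constant names are assumptions of this text) classifies a noise measurement against the 35 dB and 65 dB thresholds quoted above and notes which sound generating units the first aspect associates with each interval.

```java
/**
 * Illustrative sketch only, based on the 35 dB / 65 dB thresholds quoted above.
 * Class and method names are assumptions, not taken from the patent.
 */
public final class NoiseIntervalClassifier {

    public enum Interval { FIRST, SECOND, THIRD }

    private static final double LOW_THRESHOLD_DB = 35.0;   // upper bound of the first interval
    private static final double HIGH_THRESHOLD_DB = 65.0;  // upper bound of the second interval

    /** Maps noise data X (in dB) to the noise interval it falls in. */
    public static Interval classify(double noiseDb) {
        if (noiseDb < LOW_THRESHOLD_DB) {
            return Interval.FIRST;    // quiet: first sound generating unit (screen sounder) only
        } else if (noiseDb <= HIGH_THRESHOLD_DB) {
            return Interval.SECOND;   // normal: first and second sound generating units together
        } else {
            return Interval.THIRD;    // noisy: second sound generating unit (receiver) only
        }
    }

    private NoiseIntervalClassifier() { }
}
```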
In one possible design manner of the first aspect, the method further includes: the electronic device adjusts the volume of the sound signals output by the first sound generating unit and/or the second sound generating unit according to the size of the noise data by adopting the following formulas: L1 = (65 - X)/30, L2 = (X - 35)/30; wherein L1 represents the volume of the sound signal output by the first sound generating unit, L2 represents the volume of the sound signal output by the second sound generating unit, and X represents the noise data.
In the design mode, the electronic equipment can calculate the volume of the sound signals output by the first sound generating unit and the volume of the sound signals output by the second sound generating unit according to the formula, so that the first sound generating unit can play the sound according to the volume of the sound signals output by the first sound generating unit, the second sound generating unit can play the sound according to the volume of the sound signals output by the second sound generating unit, and the conversation experience is further improved.
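A minimal numeric sketch of the volume split L1 = (65 - X)/30, L2 = (X - 35)/30 stated in the design above; the clamping of the results to [0, 1] at the interval boundaries is an assumption, and the class and method names are hypothetical.

```java
/** Illustrative sketch of the volume formula above; normalized volumes in [0, 1]. Names are assumptions. */
public final class VolumeSplit {

    /** L1: volume of the first sound generating unit (screen sounder) for noise data X in dB. */
    public static double firstUnitVolume(double noiseDb) {
        return clamp01((65.0 - noiseDb) / 30.0);
    }

    /** L2: volume of the second sound generating unit (receiver) for noise data X in dB. */
    public static double secondUnitVolume(double noiseDb) {
        return clamp01((noiseDb - 35.0) / 30.0);
    }

    private static double clamp01(double v) {
        return Math.max(0.0, Math.min(1.0, v));
    }

    private VolumeSplit() { }
}
```

For example, at X = 50 dB both units play at half volume (L1 = L2 = 0.5), at X = 35 dB only the screen sounder plays (L1 = 1, L2 = 0), and at X = 65 dB only the receiver plays (L1 = 0, L2 = 1), which is consistent with the monotonic behaviour described for the second noise section.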
In one possible design manner of the first aspect, the electronic device receives a first operation of a user, including: the method comprises the steps that the electronic equipment receives first operation of a user in a target communication application; the target communication application is an application allowing calling of a communication intelligent tuning function; the communication intelligent tuning function is that in the conversation process, the electronic equipment selects the first sound generating unit and/or the second sound generating unit to output sound signals according to the size of noise data.
In the design mode, when the electronic equipment receives the first operation of the user in the target communication application, the electronic equipment invokes the communication intelligent tuning function through the target communication application, and then the electronic equipment selects the first generating unit and/or the second generating unit to output the sound signal according to the size of the noise data, so that the power consumption of the equipment is reduced.
In one possible design manner of the first aspect, before the electronic device detects noise data of the environmental sound in response to the first operation, the method further includes: the electronic equipment judges that the electronic equipment comprises a first sounding unit and a second sounding unit, and a communication intelligent tuning function is started.
In the design mode, when the electronic equipment detects that the electronic equipment simultaneously comprises the first sound generating unit and the second sound generating unit, the electronic equipment starts the communication intelligent tuning function, so that the power consumption of the equipment is reduced.
In one possible design manner of the first aspect, before the electronic device detects noise data of the environmental sound in response to the first operation, the method further includes: displaying indication information after the electronic equipment is started, or displaying the indication information after the electronic equipment receives the first operation; the indication information is used for reminding a user to start a communication intelligent tuning function; and the electronic equipment responds to the preset operation of the user on the indication information, and starts the communication intelligent tuning function.
In this design, the electronic device may display the indication information after the electronic device is turned on, or display the indication information after the first operation is received; because the indication information is used for reminding the user of starting the communication intelligent tuning function, the user can select whether to start the communication intelligent tuning function according to the indication information, and user experience is improved while equipment power consumption is reduced.
In one possible design of the first aspect, the electronic device has a plurality of communication applications installed therein; the plurality of communication applications includes a system application and a third party application; the method further comprises the steps of: the electronic equipment sets a plurality of communication applications as target communication applications, and allows the communication applications to call a communication intelligent tuning function; or the electronic equipment sets the system application as a target communication application, allows the system application to call the communication intelligent tuning function, and prohibits the third party application from calling the communication intelligent tuning function.
In the design mode, the electronic equipment can set the system application and the third party application in the communication applications as target communication applications, and can also set the system application as target communication applications only, namely, the system application is allowed to call the communication intelligent tuning function, and the third party application is forbidden to call the communication intelligent tuning function, so that the user experience is further improved.
In one possible design of the first aspect, the electronic device has a plurality of communication applications installed therein; the plurality of communication applications includes a system application and a third party communication application; the method further comprises: the electronic device displays a setting interface; the setting interface comprises a plurality of application options, the application options are in one-to-one correspondence with the communication applications, and each application option is used for setting whether the corresponding communication application is allowed to call the communication intelligent tuning function; and the electronic device, in response to the selection operation of the user on the application options in the setting interface, sets the communication application selected by the user as a target communication application and allows the communication application selected by the user to call the communication intelligent tuning function.
In the design mode, the user can select corresponding application options in the setting interface, and the electronic equipment sets the communication application selected by the user as a target communication application based on the communication application selected by the user, so that the communication application selected by the user is allowed to call the intelligent tuning function of the communication. That is, the user can set the corresponding target communication application according to the requirement, so that the user experience can be further improved.
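As a rough sketch of the target-application check described in the two designs above (the class name, the package names and the way the allow list is stored are all assumptions, not the patent's implementation), the communication intelligent tuning function is invoked only for applications the user has selected in the setting interface:

```java
import java.util.Set;

/** Illustrative allow-list check for "target communication applications"; all names are assumptions. */
public class SmartTuningPolicy {

    /** Package names of the communication applications selected as target applications. */
    private final Set<String> targetApps;

    public SmartTuningPolicy(Set<String> targetApps) {
        this.targetApps = targetApps;
    }

    /** Only a target communication application is allowed to call the smart tuning function. */
    public boolean mayInvokeSmartTuning(String callingPackage) {
        return targetApps.contains(callingPackage);
    }
}
```

For instance, a configuration that allows only the system dialer could pass something like Set.of("com.example.dialer") (a hypothetical package name) to the constructor; a third party communication application whose package is not in the set would then be refused the function.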
In one possible design manner of the first aspect, before detecting noise data of the environmental sound, the method further includes: the electronic device determines that the electronic device uses the earpiece to sound during the call; wherein earpiece sounding means that the electronic device outputs a sound signal using at least one of the first sound generating unit and the second sound generating unit.
In this design manner, when the electronic device outputs a sound signal through a headset or a loudspeaker during the call, the communication intelligent tuning function does not need to be started. Therefore, before the electronic device detects the noise data of the environmental sound, it also determines whether the electronic device uses the earpiece to sound during the call, that is, whether at least one of the first sound generating unit and the second sound generating unit is used to output the sound signal, which can further reduce the power consumption of the device.
In one possible design of the first aspect, the first sound generating unit is a screen sound generator and the second sound generating unit is a receiver.
In a second aspect, an electronic device is provided that includes a first sound emitting unit, a second sound emitting unit, a microphone, a sensor, and a processor; the first sound generating unit, the second sound generating unit, the microphone and the sensor are coupled with the processor; the sensor is used for receiving a first operation of a user, and the first operation is used for triggering the electronic equipment to initiate a call; the processor is used for responding to the first operation and controlling the microphone to detect noise data of the environmental sound; the processor is used for judging that the noise data is in a first noise section and controlling the first sound generating unit to output sound signals; in the first noise section, the larger the noise data is, the larger the volume of the sound signal output by the first sound generating unit is; the processor is used for judging that the noise data is switched from the first noise section to the second noise section and controlling the first sounding unit and the second sounding unit to output sound signals; in the second noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is, and the smaller the volume of the sound signal output by the first sound generating unit is; in the second noise section, the smaller the noise data is, the smaller the volume of the sound signal output by the second sound generating unit is, and the larger the volume of the sound signal output by the first sound generating unit is; wherein the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section.
In a possible design manner of the second aspect, the processor is further configured to determine that the noise data is switched from the second noise interval to the third noise interval, and control the second sounding unit to output the sound signal; in the third noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is; wherein the maximum value of the noise data of the third noise section is greater than the maximum value of the noise data of the second noise section.
In one possible design of the second aspect, the noise data in the first noise section is less than 35dB; noise data in the second noise section is greater than or equal to 35dB and less than or equal to 65dB; the noise data in the third noise interval is greater than 65dB.
In a possible design manner of the second aspect, the processor is further configured to control, according to the size of the noise data, the volume of the sound signal output by the first sound generating unit and/or the second sound generating unit by adopting the following formulas: L1 = (65 - X)/30, L2 = (X - 35)/30; wherein L1 represents the volume of the sound signal output by the first sound generating unit, L2 represents the volume of the sound signal output by the second sound generating unit, and X represents the noise data.
In one possible design of the second aspect, the sensor is specifically configured to receive a first operation of the user at the target communication application; the target communication application is an application allowing calling of a communication intelligent tuning function; the communication intelligent tuning function is that in the conversation process, the electronic equipment selects the first sound generating unit and/or the second sound generating unit to output sound signals according to the size of noise data.
In one possible design manner of the second aspect, the processor is further configured to determine that the electronic device includes a first sound generating unit and a second sound generating unit, and turn on a communication intelligent tuning function.
In one possible design of the second aspect, the electronic device further includes a display screen; the display screen is used for displaying indication information after the electronic equipment is started, or is used for displaying the indication information after the sensor receives a first operation; the indication information is used for reminding a user to start a communication intelligent tuning function; the processor is also used for responding to the preset operation of the user on the indication information and starting the communication intelligent tuning function.
In one possible design of the second aspect, a plurality of communication applications are installed in the electronic device; the plurality of communication applications includes a system application and a third party application; the processor is also used for setting a plurality of communication applications as target communication applications, and allowing the plurality of communication applications to call the intelligent tuning function of the communication; or the processor is also used for setting the system application as a target communication application, allowing the system application to call the communication intelligent tuning function, and prohibiting the third party application from calling the communication intelligent tuning function.
In one possible design of the second aspect, a plurality of communication applications are installed in the electronic device; the plurality of communication applications includes a system application and a third party communication application; displaying a setting interface on a display screen; the setting interface comprises a plurality of application options, the application options are in one-to-one correspondence with the communication applications, and the application options are used for setting the capacity of calling the intelligent tuning function of the corresponding communication applications; the processor is also used for responding to the selection operation of the user on the application options in the setting interface, setting the communication application selected by the user as a target communication application, and allowing the communication application selected by the user to call the intelligent tuning function of the communication.
In a possible design manner of the second aspect, the processor is further configured to determine that the electronic device uses the earpiece to sound during the call; wherein earpiece sounding means that the electronic device outputs a sound signal using at least one of the first sound generating unit and the second sound generating unit.
In one possible design manner of the second aspect, the first sound generating unit is a screen sound generator, and the second sound generating unit is a receiver.
For the technical effects of any one of the design manners in the second aspect, reference may be made to the technical effects of the corresponding design manners in the first aspect, which are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a screen sounding technology according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic diagram of a positional relationship between a receiver and a screen generator in an electronic device according to an embodiment of the present application;
fig. 4 is a schematic software structure of an electronic device according to an embodiment of the present application;
fig. 5a is a first schematic flowchart of a method for adjusting call volume according to an embodiment of the present application;
fig. 5b is a second schematic flowchart of a method for adjusting call volume according to an embodiment of the present application;
fig. 5c is a third schematic flowchart of a method for adjusting call volume according to an embodiment of the present application;
fig. 6 is a flow chart diagram of a method for adjusting call volume according to an embodiment of the present application;
fig. 7 is a flowchart of a method for adjusting call volume according to an embodiment of the present application;
fig. 8 is a flowchart illustrating a method for adjusting call volume according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. Wherein, in the description of the present application, "/" means that the related objects are in a "or" relationship, unless otherwise specified, for example, a/B may mean a or B; the term "and/or" in this application is merely an association relation describing an association object, and means that three kinds of relations may exist, for example, a and/or B may mean: there are three cases, a alone, a and B together, and B alone, wherein a, B may be singular or plural. Also, in the description of the present application, unless otherwise indicated, "a plurality" means two or more than two. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural. In addition, in order to clearly describe the technical solutions of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", and the like are used to distinguish the same item or similar items having substantially the same function and effect. It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the amount and order of execution, and that the words "first," "second," and the like do not necessarily differ. Meanwhile, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion that may be readily understood.
In order to clearly illustrate a call volume adjustment method provided in the embodiments of the present application, some terms that may appear in subsequent embodiments will be described first.
The screen sounding technology drives the screen (also called the display screen) and the structure of an electronic device through a vibration unit, and uses the screen as the vibrating body to transmit the sound waves generated by the vibration into the user's ear. That is, when the user is in a call, no earpiece (such as a receiver) is needed; the vibration unit in the electronic device drives the whole screen to produce sound, and the sound can be transmitted through the screen.
The vibration unit may be, for example, a diaphragm. In some embodiments, the diaphragm is a metal sheet to which multilayer piezoelectric ceramic sheets are attached. As shown in fig. 1, for example, an alternating voltage may be applied to the diaphragm; as the voltage changes, the diaphragm continuously bends up and down, thereby vibrating and producing sound.
Currently, when a user holds an electronic device (such as a mobile phone) for a call, sound is generally transmitted into the user's ear through a receiver (also referred to as an "earpiece") of the mobile phone. However, because the receiver requires an opening at the top of the mobile phone, part of the sound emitted by the mobile phone is transmitted into the user's ear from the receiver, while the other part is transmitted to the external environment through the receiver opening. As a result, the user often encounters the problem during a call that the receiver leaks sound or that the voice in the call cannot be heard clearly. For example, sound leakage in a quiet environment may lead to leakage of the user's privacy.
Based on this, in some embodiments, the electronic device introduces a screen sounding technology, which triggers the screen to vibrate through the vibration unit to produce sound, so that the occurrence of the sound leakage phenomenon can be effectively reduced. However, because the volume achievable with the screen sounding technology is limited, the sound is easily drowned out in a noisy environment.
In contrast, in the embodiment of the present application, when the user holds the electronic device to communicate with another electronic device, the electronic device detects noise data of the current environmental sound; when the noise data is in the first noise section, the electronic device outputs a sound signal through the first sound generating unit; when the noise data is in the second noise section, the electronic device outputs sound signals through the first sound generating unit and the second sound generating unit; when the noise data is in the third noise section, the electronic device outputs a sound signal through the second sound generating unit. That is, when the noise data falls in different noise sections, the electronic device outputs sound signals through different sound generating units, so that during a call the user does not disturb people nearby in a quiet environment and can still hear the other party clearly in a noisy environment, which improves the call experience of the user.
Wherein the minimum value of the noise data of the third noise section is larger than the maximum value of the noise data of the second noise section, and the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section.
The first sound generating unit may be, for example, a screen sound generator; wherein, the screen generator outputs sound signals through a screen sounding technology. The second sound generating unit may be a receiver, for example. It should be noted that the first sound generating unit and the second sound generating unit may be other suitable sound generators, which is not limited in this application; the functions described in the above embodiments of the present application are all within the scope of protection of the present application.
In some embodiments, when the electronic device determines that the noise data is in the first noise section, the electronic device outputs a sound signal through the first sound generating unit; in the first noise section, the larger the noise data is, the larger the volume of the sound signal output by the first sound generating unit is; when the electronic equipment judges that the noise data is switched from the first noise section to the second noise section, the electronic equipment outputs sound signals through the first sound generating unit and the second sound generating unit; in the second noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is, and the smaller the volume of the sound signal output by the first sound generating unit is; in the second noise section, the smaller the noise data is, the smaller the volume of the sound signal output by the second sound generating unit is, and the larger the volume of the sound signal output by the first sound generating unit is; wherein the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section.
In addition, if the electronic equipment judges that the noise data is switched from the second noise section to the third noise section, the electronic equipment outputs a sound signal through the second sound generating unit; in the third noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is; wherein the maximum value of the noise data of the third noise section is greater than the maximum value of the noise data of the second noise section.
In this embodiment, since the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section, the maximum value of the noise data of the third noise section is larger than the maximum value of the noise data of the second noise section, the noise data in the second noise section can be considered to be larger than the noise data in the first noise section, and the noise data in the third noise section is larger than the noise data in the second noise section, that is, when the noise data of the environmental sound is in the first noise section, the current environment can be considered to be a quiet environment (that is, the noise of the current environment is small, even almost no noise); if the noise data of the current environmental sound is in the second noise section, the current environment can be considered as a normal environment (i.e. the noise of the current environment is general and does not cause too much influence on the user); if the noise data of the environmental sound is in the third noise section, the current environment is considered to be a noisy environment (i.e. the noise of the current environmental sound is large, which may affect the user call). On the basis, if the user holds the electronic equipment in the conversation process and the quiet environment is switched to the normal environment, the electronic equipment can switch the sound signal output by the first sound generating unit to the sound signal output by the first sound generating unit and the second sound generating unit at the same time; if the user holds the electronic equipment in the conversation process and the normal environment is switched to the noisy environment, the electronic equipment can output a sound signal by the second sound generating unit. In other words, the electronic device can collect noise data of the current environmental sound in real time, and dynamically adjust sound emitted by the electronic device by controlling the first sound emitting unit and/or the second sound emitting unit based on the difference of the noise data of the environmental sound.
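The dynamic adjustment described above can be pictured as a small control loop that periodically re-reads the noise data and reconfigures the active sound generating units. The sketch below follows the per-interval behaviour stated in the first aspect (first interval: screen sounder only; second interval: both units with the split volumes; third interval: receiver only); the NoiseMeter and SoundRouter interfaces are hypothetical placeholders rather than real Android APIs, and the sketch reuses the NoiseIntervalClassifier and VolumeSplit helpers sketched earlier, assumed to be in the same package.

```java
/** Illustrative control loop for the dynamic adjustment described above; interfaces are hypothetical. */
public class CallVolumeController {

    /** Hypothetical source of noise data, e.g. backed by the microphone. */
    public interface NoiseMeter { double currentNoiseDb(); }

    /** Hypothetical sink that routes audio to the sound generating units. */
    public interface SoundRouter {
        void useScreenSounderOnly();               // first noise interval
        void useBothUnits(double l1, double l2);   // second noise interval, split volumes
        void useReceiverOnly();                    // third noise interval
    }

    private final NoiseMeter meter;
    private final SoundRouter router;

    public CallVolumeController(NoiseMeter meter, SoundRouter router) {
        this.meter = meter;
        this.router = router;
    }

    /** Invoked periodically while the call is active. */
    public void adjustOnce() {
        double x = meter.currentNoiseDb();
        switch (NoiseIntervalClassifier.classify(x)) {
            case FIRST:
                router.useScreenSounderOnly();
                break;
            case SECOND:
                router.useBothUnits(VolumeSplit.firstUnitVolume(x), VolumeSplit.secondUnitVolume(x));
                break;
            case THIRD:
                router.useReceiverOnly();
                break;
        }
    }
}
```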
It should be noted that the embodiment of the present application does not limit the order of switching among the noise sections during a call, such as switching from the first noise section to the second noise section, or from the second noise section to the third noise section. It should be understood that, depending on the environment the user is in during the call, the noise data may switch from the second noise section to the first noise section, from the first noise section to the second noise section, or between noise sections in any other scenario; all of these cases fall within the protection scope of the embodiment of the present application. That is, as long as the noise section in which the noise data lies changes during the call, the electronic device can dynamically adjust the sound it emits according to that noise section.
In one possible implementation, the electronic device may divide the call into three modes (e.g., a first mode, a second mode, and a third mode) based on noise data of the detected ambient sound, for example. The first mode corresponds to a first noise section, the second mode corresponds to a second noise section, and the third mode corresponds to a third noise section. For example, in a first mode, the electronic device may only control the screen generator to sound; in a second mode, the electronic device can control the receiver and the screen generator to produce sounds in a frequency division mode; in the third mode, the electronic device may control the receiver and the screen generator to sound simultaneously.
Note that, in the embodiment of the present application, the first mode may also be referred to as a privacy mode, the second mode may also be referred to as a normal mode, and the third mode may also be referred to as a loud volume mode.
In this application, the range of the noise data may be, for example, [0, 100], in dB (decibels). The three modes described in the above embodiments correspond to dividing the range of the noise data into three sections, one section for each mode. It should be noted that the "mode" is merely used to distinguish between different sections of noise data, and is only an example of the present application rather than a limitation. For example, "mode" may be replaced with "type", "interval", or other expressions, which are not listed here.
The method for adjusting the call volume according to the embodiment of the present application may be applied to mobile phones, tablet computers, personal computers (personal computer, PC), personal digital assistants (personal digital assistant, PDA), smart watches, netbooks, wearable electronic devices, augmented reality (augmented reality, AR) devices, virtual Reality (VR) devices, vehicle-mounted devices, smart cars, smart speakers, and other electronic devices, which is not limited in this embodiment of the present application.
Fig. 2 is a schematic structural diagram of the electronic device 100. Wherein the electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
Wherein the sensor module 180 (e.g., a sensor) is configured to receive an operation that triggers the electronic device 100 to initiate a call.
It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
Illustratively, in the present embodiment, the processor 110 is configured to control the microphone 170C to detect noise data of the environmental sound in response to an operation of the electronic device 100 to initiate a call; the processor 110 is further configured to determine that the noise data is in the first noise section, and control the screen sounder 170E (the first sound generating unit) to output a sound signal; the processor 110 is further configured to determine that the noise data is switched from the first noise section to the second noise section, and control the screen sounder 170E (the first sound generating unit) and the receiver 170B (the second sound generating unit) to output sound signals.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device. In other embodiments, the electronic device may also use different interfacing manners in the foregoing embodiments, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a screen generator 170E, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals. The receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. Illustratively, in the present embodiment, the microphone 170C is also used to collect and detect noise data of ambient sounds.
The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The screen generator 170E, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. For example, the screen generator 170E may include a diaphragm; an alternating voltage is applied to the diaphragm, which continuously bends up and down with the voltage changes, thereby converting the audio signal into a sound signal.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, audio, video, etc. files are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions. The processor 110 implements the various functional applications and data processing of the electronic device by executing the instructions stored in the internal memory 121. For example, in an embodiment of the present application, the processor 110 executes the instructions stored in the internal memory 121, and the internal memory 121 may include a storage program area and a storage data area.
The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device (e.g., audio data, phonebook, etc.), and so forth. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
Taking the above electronic device being a mobile phone as an example, the positional relationship between the receiver 170B and the screen sounder 170E in the mobile phone is illustrated. Illustratively, as shown in fig. 3, the receiver 170B is disposed below the display screen of the mobile phone and has an opening at the top of the mobile phone; when the mobile phone is in a call scenario, sound is transmitted into the user's ear through the receiver 170B. The screen sounder 170E is also disposed below the display screen of the mobile phone and is located on the side of the receiver 170B away from the top of the mobile phone; when the mobile phone is in a call scenario, the screen sounder 170E transmits the sound waves generated by vibration into the user's ear. Referring to fig. 3, it can be seen that when a user holds the mobile phone to make a call, both the receiver 170B and the screen sounder 170E are close to the user's auricle, so that the user can hear the sound emitted from both the receiver 170B and the screen sounder 170E.
In some embodiments, the software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the invention takes an Android system with a layered architecture as an example, and illustrates a software architecture of the electronic device 100.
Fig. 4 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. In some embodiments, the Android system may include an application layer (APP), a framework layer (FWK), a hardware abstraction layer (hardware abstraction layer, HAL), and a Linux kernel layer (Linux kernel). For clarity of describing the embodiments of the present application, in some embodiments, the software structural block diagram of the electronic device 100 shown in fig. 4 further includes the hardware of the electronic device 100 related to the foregoing embodiments. For example, the electronic device 100 further includes a receiver, a screen generator, and a microphone (mic).
For example, as shown in fig. 4, the application program layer may include applications such as a setup application, a system communication application (e.g., APP 3), a third party communication application (e.g., APP1, APP 2), and so on. Wherein the setup application comprises a setup interface.
The hardware abstraction layer is an interface layer between the Linux kernel layer and the hardware, and can be used for abstracting the hardware. Illustratively, as shown in FIG. 4, the hardware abstraction layer includes an audio interface (audio).
The Linux kernel layer provides a bottom layer driver for various hardware of the electronic device. Illustratively, as shown in FIG. 4, the Linux kernel layer includes an audio driver module.
The framework layer provides application programming interfaces (application programming interface, APIs) and programming services for application programs of the application layer. The framework layer includes some predefined functions. In some embodiments, as shown in fig. 4, the framework layer includes a hardware detection module, a call audio module, an audio channel detection module, a volume control module, an ambient sound identification module, a database, and the like.
The hardware detection module is used for detecting whether the electronic device comprises the two sound generating devices, namely the receiver and the screen sounder. In some embodiments, a developer may configure, during development, configuration parameters for both the receiver and the screen sounder in the electronic device. The configuration parameters may be, for example, configuration files, that is, a configuration file for the receiver and a configuration file for the screen sounder.
In this case, when the electronic device is powered on, the hardware detection module reads the configuration parameters; when the hardware detection module reads the configuration parameters of the receiver and the configuration parameters of the screen sounder at the same time, it determines that the electronic device comprises the two sound generating devices, namely the receiver and the screen sounder. On this basis, the electronic device has the call smart tuning function.
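A minimal sketch of the boot-time hardware check described above; the configuration file paths and the way they are probed are assumptions made for illustration, not taken from the patent or from any real Android vendor layout.

```java
import java.nio.file.Files;
import java.nio.file.Path;

/** Illustrative boot-time detection of the two sound generating devices; paths are assumptions. */
public class HardwareDetector {

    // Hypothetical configuration files describing the receiver and the screen sounder.
    private static final Path RECEIVER_CONF = Path.of("/vendor/etc/audio/receiver.conf");
    private static final Path SCREEN_SOUNDER_CONF = Path.of("/vendor/etc/audio/screen_sounder.conf");

    /** The call smart tuning function is offered only when both configuration files are found. */
    public boolean supportsSmartTuning() {
        return Files.exists(RECEIVER_CONF) && Files.exists(SCREEN_SOUNDER_CONF);
    }
}
```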
When a user uses the electronic device to make a call, the call intelligent tuning function can automatically adjust the volume of sound (for example, the sound of the opposite party when the user is making a call) made by the electronic device according to the noise data of the environment where the user is currently making a call.
It should be noted that, in the above embodiment, the hardware detection module may read the configuration parameter when the electronic device is turned on each time; the hardware detection module may read the configuration parameter when the electronic device is powered on for the first time, which is not limited in the embodiment of the present application.
In some embodiments, when the electronic device is powered on and the hardware detection module determines that the electronic device includes a receiver and a screen sounder, the electronic device turns on the call intelligent tuning function. In other embodiments, when the hardware detection module of the electronic device determines that the electronic device includes a receiver and a screen sounder, and the user starts any communication application on the electronic device, the electronic device automatically turns on (or reminds the user to turn on) the call intelligent tuning function (or communication intelligent tuning function). In still other embodiments, the electronic device turns on the call intelligent tuning function when the hardware detection module of the electronic device determines that the electronic device includes a receiver and a screen sounder and the electronic device is in a call.
Take, as an example, the case where the hardware detection module determines at power-on that the electronic device includes a receiver and a screen sounder, and the electronic device then turns on the call intelligent tuning function. In some embodiments, when the hardware detection module detects at power-on that the electronic device includes the receiver and the screen sounder, the electronic device turns on the call intelligent tuning function in response to an operation of the user. Taking a mobile phone as the electronic device, for example, when the mobile phone is powered on and it is detected that the mobile phone includes the two sound emitting devices, namely the receiver and the screen sounder, the mobile phone displays an interface 201 as shown in (1) in fig. 5a; the interface 201 is an interface displayed after the mobile phone is powered on. The interface 201 includes indication information 202, which is used to remind the user to turn on the call intelligent tuning function. For example, the indication information is: whether to turn on the call intelligent tuning function. Then, in response to the user's operation of the "yes" control, the mobile phone displays an interface 203 as shown in (2) in fig. 5a; the interface 203 includes a "call intelligent tuning" setting item 204. In other embodiments, when the hardware detection module detects that the mobile phone includes the receiver and the screen sounder, the hardware detection module reports the detection result to the setting application, so that the mobile phone automatically turns on the call intelligent tuning function. Illustratively, as shown in (2) in fig. 5a, the setting interface 203 of the mobile phone automatically displays the "call intelligent tuning" setting item 204 (or application option) without requiring any user operation.
Take, as another example, the case where the hardware detection module of the electronic device determines that the electronic device includes a receiver and a screen sounder, and the electronic device turns on the call intelligent tuning function when the user starts any communication application on the electronic device. In some embodiments, the hardware detection module of the electronic device determines that the electronic device includes the receiver and the screen sounder; on this basis, the electronic device turns on the call intelligent tuning function in response to the user's operation of starting a communication application. Taking a mobile phone as the electronic device, as shown in (1) in fig. 5b, in response to the user's operation on the phone application 205, the mobile phone displays an interface 206 as shown in (2) in fig. 5b, where the interface 206 is a contact list interface in the call application. The interface 206 includes indication information 207, which is used to remind the user to turn on the call intelligent tuning function. For example, the indication information is: whether to turn on the call intelligent tuning function. Then, in response to the user's operation of the "yes" control (or a preset operation), the mobile phone displays an interface 203 as shown in (3) in fig. 5b, and the interface 203 includes a "call intelligent tuning" setting item 204. In other embodiments, as shown in (1) in fig. 5c, the mobile phone displays an interface 208 showing the information of contact C; the interface 208 includes a call control 209, an information control 210, a video control 211, and social data (such as an account of APP1, an account of APP2, etc.). For example, the mobile phone automatically turns on the call intelligent tuning function in response to the user's operation of the call control 209. For another example, in response to the user's operation of any APP in the social data, the mobile phone displays an interface 203 as shown in (2) in fig. 5c, and the interface 203 includes the "call intelligent tuning" setting item 204.
Take, as a further example, the case where the hardware detection module of the electronic device determines that the electronic device includes a receiver and a screen sounder, and the electronic device turns on the call intelligent tuning function during a call. In some embodiments, the electronic device turns on the call intelligent tuning function in response to an operation of the user.
The operation may be any one of a voice operation, a gesture operation, or a touch operation, for example. The touch operation may be, for example, a click operation or a slide operation. In some embodiments, the operation may also be an operation in which the user presses a power key.
Taking a voice operation as an example, when the user holds the electronic device and is in a call, if the user does not want other people to hear the call content, or the current environment is noisy, the user can input voice to the electronic device so that the electronic device turns on the call intelligent tuning function. The voice input by the user to the electronic device may be, for example: "turn on the call intelligent tuning function".
Taking the operation of pressing the power key as an example, when the user holds the electronic device and is in a call, if the user does not want other people to hear the call content, or the current environment is noisy, the user can press the power key twice in succession so that the electronic device turns on the call intelligent tuning function.
In the following embodiments of the present application, the case where the hardware detection module of the electronic device determines that the electronic device includes a receiver and a screen sounder, and the electronic device turns on the call intelligent tuning function during a call, is taken as an example. In connection with the above embodiments, the "call intelligent tuning" setting item 204 is typically in an on state. In one possible example, when the "call intelligent tuning" setting item 204 is in the on state, the call intelligent tuning function is applicable to all communication applications capable of making calls (e.g., APP1, APP2, APP3, etc.), i.e., the electronic device allows all communication applications to invoke the call intelligent tuning function. In another possible example, when the "call intelligent tuning" setting item 204 is in the on state, the call intelligent tuning function is applicable only to cellular calls, i.e., the function is turned on by default for the system communication application of the UI layer (or system application, such as APP3) and turned off by default for other third party communication applications (or third party applications, such as APP1, APP2, etc.). In other words, the electronic device only allows the system communication application to invoke the call intelligent tuning function. In still other embodiments, when the "call intelligent tuning" setting item 204 is in the on state, the call intelligent tuning function is not by default applicable to all communication applications. On this basis, the electronic device displays a setting interface in response to the user's operation of the "call intelligent tuning" setting item 204. The setting interface includes application options; the application options include all communication applications (such as the system application and the third party applications), and the user can select the corresponding applications from the application options to allow them to invoke the call intelligent tuning function.
As shown in (1) in fig. 6, the "call intelligent tuning" setting item 204 is in an on state, and the call intelligent tuning function is only applicable to cellular calls at this time. Then, in response to the user's operation of the "call intelligent tuning" setting item 204, the electronic device displays an interface 301 as shown in (2) in fig. 6; the interface 301 includes all third party communication applications (e.g., APP1, APP2, etc.) supporting the call intelligent tuning function. The user may select the corresponding application as an application that is allowed to invoke the call intelligent tuning function. For example, in response to an operation (or selection operation) by which the user selects the WeChat application in the interface 301, the electronic device displays the interface 302 as shown in (3) in fig. 6, in which the WeChat application is turned on. That is, the user has authorized the WeChat application to invoke the call intelligent tuning function. Of course, the user may also authorize other third party communication applications in the interface 301, or use the same method to cancel (or modify) the permission of a third party communication application to invoke the call intelligent tuning function.
After the user selects a corresponding third party communication application in the interface 301, the electronic device stores the application information (e.g., the package name) of the applications that the user has allowed to invoke the call intelligent tuning function. For example, in combination with the software architecture schematic of the electronic device shown in fig. 4, the application list set by the user, containing the applications allowed to invoke the call intelligent tuning function, is stored in a database in the framework layer of the electronic device.
It should be noted that, the operation of the user mentioned in the above embodiment may be, for example, any one of a voice operation, a gesture operation, or a touch operation. The touch operation may be, for example, a click operation or a slide operation. In addition, in the above embodiments, the electronic device is illustrated in fig. 5a to 6 as a mobile phone, and it should be understood that the electronic device may also be other electronic devices with a call function, which are not listed here.
The call audio module is used for detecting whether the electronic equipment is in a call state. Illustratively, when a user makes a call using a communication application (e.g., a telephony application) of the electronic device, the communication application sends a call request to the call audio module. The call request is used for acquiring the permission of the call application to use the audio device. That is, the call application sends a call request to the call audio module to inform the call audio module that a call is currently generated, and the usage rights of the audio device need to be obtained. Thus, the call audio module can detect that the electronic equipment is in a call state.
The audio device may include, for example, at least one of the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the screen generator 170E of the above embodiments.
In some embodiments, the call request sent by the communication application to the call audio module further includes the application information (e.g., the package name) of the call application. On this basis, when the call application is in a call using the audio device, the call audio module sets the application information of the call application of the current call according to the call request.
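A minimal Java sketch of such a call audio module is given below; the record, listener and method names are assumptions introduced for illustration and do not correspond to actual framework interfaces:

```java
/** Sketch of a call audio module that records the call state from an incoming call request. */
public class CallAudioModule {

    /** A call request carrying the application information (package name) of the call application. */
    public record CallRequest(String packageName) {}

    /** Listener interface standing in for the volume control module. */
    public interface CallStateListener {
        void onCallStateChanged(boolean inCall, String packageName);
    }

    private volatile boolean inCall;
    private volatile String currentCallPackage;
    private volatile CallStateListener listener;

    public void registerListener(CallStateListener listener) {
        this.listener = listener;
    }

    /** Invoked when a communication application asks for permission to use the audio device. */
    public void onCallRequest(CallRequest request) {
        inCall = true;
        currentCallPackage = request.packageName();          // application information of the current call
        CallStateListener l = listener;
        if (l != null) {
            l.onCallStateChanged(true, currentCallPackage);  // report the call state and synchronize the package name
        }
    }
}
```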
The audio channel detection module is used for detecting whether the electronic device uses the earpiece to sound during a call. Earpiece sounding refers to sounding by using at least one of the receiver or the screen sounder during a call. Illustratively, the audio channel detection module receives the audio channel broadcast by the system in real time. The audio channel includes a speaker channel, a wired earphone channel, a Bluetooth channel, a receiver channel, and a screen sounder channel. For example, when the audio channel detection module receives that the audio channel broadcast by the system is the receiver channel, the audio channel detection module determines that the electronic device uses the receiver to sound during the call, that is, the electronic device uses the earpiece to sound.
It should be noted that, when making a call with the electronic device, the user may use the speaker, a wired earphone, or a Bluetooth earphone. It should be understood that when the electronic device uses the speaker, a wired earphone, or a Bluetooth earphone for a call, the sound leakage problem and the problem of the user hearing the sound unclearly do not arise, so in these scenarios there is no need to adjust the sound production ratio of the receiver and the screen sounder; that is, the electronic device does not need to use the call intelligent tuning function. Only when the electronic device uses the receiver for a call does the electronic device turn on the call intelligent tuning function. Based on this, in the embodiments of the present application, it is necessary to detect whether the electronic device is using the earpiece during a call.
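A minimal sketch of the earpiece-sounding decision, with the channel names written as an illustrative enum, could look as follows (all names are assumptions):

```java
/** Sketch of the earpiece-sounding check performed by an audio channel detection module. */
public class AudioChannelDetector {

    /** The audio channels broadcast by the system, as listed in the description above. */
    public enum AudioChannel { SPEAKER, WIRED_EARPHONE, BLUETOOTH, RECEIVER, SCREEN_SOUNDER }

    /** Earpiece sounding: the receiver and/or the screen sounder is the active output channel. */
    public static boolean isEarpieceSounding(AudioChannel channel) {
        return channel == AudioChannel.RECEIVER || channel == AudioChannel.SCREEN_SOUNDER;
    }
}
```

Under this sketch, the speaker, wired earphone and Bluetooth channels return false, so the call intelligent tuning function would not be triggered for those channels.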
In some embodiments, after the call audio module determines the application information of the call application, the call audio module synchronizes the application information to the volume control module. After receiving the application information, the volume control module determines whether the call application is a target communication application according to the application information. As shown in fig. 4, after the volume control module determines that the call application is the target communication application, the volume control module sends a trigger instruction to the environmental sound identification module; the trigger instruction is used for instructing the environmental sound identification module to determine the noise data of the current environmental sound and to report the determined noise data to the volume control module. The noise data may be, for example, a noise level in decibels (dB).
Then, the volume control module first determines the volume mode corresponding to the current environmental sound according to the noise data, and calculates the respective sounding parameters of the receiver and the screen sounder (for example, the volumes of the sound signals output by the receiver and the screen sounder) according to the volume mode and the noise data. The volume control module then calls the audio interface of the hardware abstraction layer to send the sounding parameters of the receiver and the sounding parameters of the screen sounder to the audio driver module of the Linux kernel layer, and the audio driver module drives the receiver and the screen sounder to output audio according to their respective sounding parameters.
In the embodiments of the present application, when the electronic device includes the two sound emitting devices, namely the receiver and the screen sounder, the electronic device can turn on the call intelligent tuning function. Taking the electronic device in fig. 4 as an example, a specific process in which the electronic device executes the call volume adjusting method is described below; as shown in fig. 4 and fig. 7, the call volume adjusting method may include steps A1 to A20.
A1, the electronic equipment responds to the operation of a user on a setting interface to set an application list.
The application list comprises applications allowing calling of the call intelligent tuning function.
In some embodiments, the electronic device may detect whether the electronic device includes a receiver and a screen generator through a hardware detection module. When the hardware detection module detects that the electronic device includes a receiver and a screen generator, as shown in (1) of fig. 6, the setting interface of the electronic device automatically displays a "call wisdom tuning" setting item. The setting interface is an interface after a user starts a setting application.
It should be noted that, in general, the "call intelligent tuning" setting item included in the setting interface is in an open state, and in this case, the call intelligent tuning function of the electronic device is only applicable to cellular calls. In other words, the call intelligent tuning function is only turned on by default for system call applications and turned off by default for third party call applications.
The user may then click the "call intelligent tuning" setting item included in the setting interface, and the electronic device displays an interface as shown in (2) in fig. 6, which includes all third party communication applications (e.g., APP1 and APP2, etc.) that support the call intelligent tuning function. The user may select the corresponding application as an application that is allowed to invoke the call intelligent tuning function, i.e., the user may set the application list of applications allowed to invoke the call intelligent tuning function.
Illustratively, when the user sets APP1 and APP2 as applications that allow the call intelligent tuning function to be invoked, the application list includes APP1 and APP2.
As also shown in fig. 7, in some embodiments, after the user sets the application list of applications allowed to invoke the call intelligent tuning function, the electronic device stores the application list set by the user. Illustratively, the database stores the information (e.g., the package names) of the respective applications in the application list set by the user. On this basis, the volume control module reads the content of the database once when the electronic device is powered on, registers a listener with the database, and updates the read content when the application list stored in the database changes.
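One possible shape of this database and the listener registration is sketched below in Java; the class and method names are assumptions for illustration only:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/** Sketch of a framework-layer database holding the application list, with change notification. */
public class SmartTuningDatabase {

    /** Observer interface standing in for the volume control module's registered listener. */
    public interface Observer {
        void onAppListChanged(List<String> packageNames);
    }

    private final List<String> allowedPackages = new CopyOnWriteArrayList<>();
    private final List<Observer> observers = new CopyOnWriteArrayList<>();

    /** The volume control module registers once at power-on and receives the current list immediately. */
    public void registerObserver(Observer observer) {
        observers.add(observer);
        observer.onAppListChanged(List.copyOf(allowedPackages));   // initial read of the database content
    }

    /** Called by the setting interface when the user edits the application list (steps A3 and A4). */
    public void storeAppList(List<String> packageNames) {
        allowedPackages.clear();
        allowedPackages.addAll(packageNames);
        for (Observer o : observers) {
            o.onAppListChanged(List.copyOf(allowedPackages));      // notify registered listeners of the change
        }
    }
}
```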
A2, the electronic equipment receives the application list through the setting interface.
A3, the electronic equipment sends the application list to the database through the setting interface.
And A4, the electronic equipment stores an application list through the database.
A5, the electronic equipment monitors the database through the volume control module.
It should be noted that the sequence of the step A5 and the steps A1 to A4 is not limited. Step A5 may precede step A1 to step A4 or follow step A1 to step A4.
And A6, the electronic equipment responds to the first operation of the user on the communication application to initiate a call.
The communication applications may include a system communication application and third party communication applications. The user can make a call using the third party communication application APP1 or APP2, or using the system communication application APP3.
It should be noted that, for the illustration of the first operation, reference may be made to the above embodiments, and no further description is given here.
And A7, the electronic equipment sends a call request to the call audio module through the call application.
The call request is used for informing the call audio module that the call is generated, and the permission of the call application to use the audio device needs to be acquired.
The audio device may include at least one of the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the screen speaker 170E in the above embodiments, for example. Taking an audio device as a receiver as an example, the communication application can acquire the permission of using the receiver from the call audio module, and after the communication application acquires the permission, the communication application can use the receiver to perform a call.
And A8, the electronic equipment detects the call state according to the call request through the call audio module.
When the communication application sends a call request to the call audio module, it informs the call audio module that a call is currently generated and that the usage rights of the audio device need to be obtained. Therefore, the call audio module can detect the call state of the electronic device, that is, that a call is currently generated on the electronic device.
In some embodiments, when the call audio module detects that a call is currently generated on the electronic device, the call audio module reports the current call state to the volume control module, so that the volume control module can acquire the call state of the electronic device. In other embodiments, the volume control module registers a listener with the call audio module, and when the call audio module detects that a call is currently generated on the electronic device, a callback is triggered so that the volume control module can acquire the call state in time.
And A9, the electronic equipment reports the call state to the volume control module through the call audio module.
A10, the electronic equipment detects an audio channel during conversation through the audio channel detection module.
The electronic device detects the audio channel during the call through the audio channel detection module, and determines, according to the audio channel, whether the earpiece is used to sound during the call.
Earpiece sounding refers to sounding by using at least one of the receiver or the screen sounder during a call. Illustratively, the audio channel detection module receives the audio channel broadcast by the system in real time. The audio channel includes a speaker channel, a wired earphone channel, a Bluetooth channel, a receiver channel, and a screen sounder channel. For example, when the audio channel detection module receives that the audio channel broadcast by the system is the receiver channel, the audio channel detection module determines that the electronic device uses the receiver to sound during the call, that is, the electronic device uses the earpiece to sound.
Correspondingly, when the user selects different audio channels for a call, the audio channel detection module reports the detected audio channel to the volume control module, so that the volume control module can acquire the audio channel of the current call in real time and determine whether the electronic device uses the earpiece to sound. If so, the electronic device determines to execute the call intelligent tuning function.
It should be noted that, when making a call with the electronic device, the user may use the speaker, a wired earphone, or a Bluetooth earphone. It should be understood that when the electronic device uses the speaker, a wired earphone, or a Bluetooth earphone for a call, the sound leakage problem and the problem of the user hearing the sound unclearly do not arise, so in these scenarios there is no need to adjust the sound production ratio of the receiver and the screen sounder; that is, the electronic device does not need to use the call intelligent tuning function. Only when the electronic device uses the receiver for a call does the volume control module of the electronic device execute the call intelligent tuning function.
A11, the electronic equipment reports the audio channel to the volume control module through the audio channel detection module.
A12, the electronic equipment determines application information of the call application according to the call request through the call audio module.
Illustratively, the call request sent by the call application to the call audio module further includes the application information (e.g., the package name) of the call application. On this basis, when the call application is in a call using the audio device, the call audio module sets the application information of the call application of the current call according to the call request.
A13, the electronic equipment synchronizes application information to the volume control module through the call audio module.
A14, the electronic equipment determines a target communication application according to the application information of the call application through the volume control module.
Illustratively, after the volume control module receives the application information of the call application, the volume control module matches the application information of the communication application with an application list stored in the database. For example, as shown in fig. 8, if the database does not store the application list, the volume control module determines whether the current call is a cellular call; if so, the communication application is the target communication application. Where cellular calls refer to calls made by a user using a "phone application" in an electronic device. For example, when the volume control module determines that the application information of the communication application of the current call is: when com.android.incallui, the volume control module determines that the call application of the current call is a telephone application, i.e. the current call is a cellular call. If the application list is stored in the database, the volume control module judges whether the call application is in the application list, and if so, the volume control module determines that the call application is the target communication application. The application list includes application information of the third party communication application (i.e., application information of the third party communication application set by the user).
It should be noted that, if the current call application is not the target communication application, the volume control module of the electronic device does not execute the call intelligent tuning function.
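The matching logic of step A14 could be sketched as follows; the class name is hypothetical, and the sketch additionally keeps cellular calls enabled even when an application list exists, consistent with the default behaviour described earlier:

```java
import java.util.List;

/** Sketch of the target communication application check performed by the volume control module. */
public class TargetAppMatcher {

    // Package name of the system dialer UI, as in the example above.
    private static final String CELLULAR_CALL_PACKAGE = "com.android.incallui";

    private final List<String> allowedPackages;   // application list read from the database (may be empty)

    public TargetAppMatcher(List<String> allowedPackages) {
        this.allowedPackages = allowedPackages;
    }

    /** Returns true when the calling application is allowed to invoke the call intelligent tuning function. */
    public boolean isTargetCommunicationApp(String packageName) {
        if (allowedPackages.isEmpty()) {
            // No user-defined application list: only a cellular call uses the function.
            return CELLULAR_CALL_PACKAGE.equals(packageName);
        }
        // Cellular calls remain enabled by default; third party applications must be in the list.
        return CELLULAR_CALL_PACKAGE.equals(packageName) || allowedPackages.contains(packageName);
    }
}
```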
And A15, the electronic equipment sends a trigger instruction for detecting the environmental sound to the environmental sound identification module through the volume control module.
And A16, the electronic equipment acquires noise data of the environmental sound through the environmental sound identification module, and determines the volume mode of the current environmental sound according to the noise data.
In some embodiments, in combination with the software architecture diagram of the electronic device shown in fig. 4, after receiving the trigger instruction, the environmental sound identification module collects the ambient sound within a preset duration through the microphone. The microphone sends the collected sound to the environmental sound identification module, and the environmental sound identification module calculates the noise data and determines the volume mode of the current environmental sound according to the noise data.
Illustratively, the noise data of the ambient sound satisfies the following formula:
SPL = 20 × log10(p_e / p_ref);
wherein SPL is the noise data, p_e is the effective value of the ambient sound picked up by the microphone, and p_ref is the effective value of the reference ambient sound. In some embodiments, p_ref = 2×10⁻⁵ Pa.
It should be noted that the preset duration may be, for example, 10s, 20s, 30s or other suitable durations, which is not limited in this embodiment of the present application.
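Assuming the microphone samples have been calibrated to sound pressures in pascals, the calculation could be sketched in Java as follows (class and method names are illustrative):

```java
/** Sketch of the sound pressure level calculation described by the formula above. */
public class NoiseMeter {

    private static final double P_REF = 2e-5;   // reference sound pressure p_ref in Pa

    /**
     * Computes the noise data (SPL in dB) from microphone samples collected over the preset duration.
     * The samples are assumed to be non-empty and already calibrated to sound pressures in Pa.
     */
    public static double soundPressureLevel(double[] pressureSamples) {
        double sumOfSquares = 0.0;
        for (double p : pressureSamples) {
            sumOfSquares += p * p;
        }
        double pe = Math.sqrt(sumOfSquares / pressureSamples.length);   // effective (RMS) value p_e
        return 20.0 * Math.log10(pe / P_REF);                           // SPL = 20 * log10(p_e / p_ref)
    }
}
```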
After the environmental sound identification module acquires the noise data of the current environmental sound, the environmental sound identification module can determine the volume mode of the current environmental sound according to the noise data. By way of example, the noise data of the environmental sound in a whispering scene is generally considered to be approximately between 20 dB and 30 dB; in this scene, the noise of the environmental sound has little influence on the user, so the volume mode of the environmental sound in this scene can be regarded as the privacy mode. The noise data of the environmental sound in an ordinary indoor scene is approximately between 50 dB and 60 dB; in this scene, the noise of the environmental sound is noise that the user can accept, so the volume mode of the environmental sound in this scene can be regarded as the normal mode. The noise data of the environmental sound in an outdoor scene (e.g., a traffic road) is approximately 70 dB to 90 dB; in this scene, the noise of the environmental sound has a great influence on the user, and the user often needs to raise the volume to talk, so the volume mode of the environmental sound in this scene can be regarded as the high volume mode.
Based on the noise data of the environmental sound in the different scenes above, the embodiments of the present application provide a correspondence between the volume mode of the environmental sound and the noise data, as shown in Table 1 below.
TABLE 1
Volume mode of ambient sound        Noise data
Privacy mode                        < 35 dB
Normal mode                         35 dB ~ 65 dB
High volume mode                    > 65 dB
Note that the correspondence between the volume mode of the ambient sound and the noise data shown in Table 1 is merely an example and does not limit the present application. It should be understood that other suitable correspondences between the volume mode of the environmental sound and the noise data also fall within the scope of the embodiments of the present application.
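The correspondence of Table 1 could be expressed as the following illustrative Java sketch (names are assumptions):

```java
/** Sketch of the mapping from noise data to the volume mode of the environmental sound (Table 1). */
public enum VolumeMode {
    PRIVACY, NORMAL, HIGH_VOLUME;

    public static VolumeMode fromNoiseData(double noiseDb) {
        if (noiseDb < 35.0) {
            return PRIVACY;      // e.g. a whispering scene
        } else if (noiseDb <= 65.0) {
            return NORMAL;       // e.g. an ordinary indoor scene
        } else {
            return HIGH_VOLUME;  // e.g. an outdoor scene near a traffic road
        }
    }
}
```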
And A17, the electronic equipment reports the noise data and the volume mode of the environmental sound to the volume control module through the environmental sound identification module.
And A18, the electronic equipment determines respective sounding parameters of the receiver and the screen sounder according to the noise data and the volume mode of the environmental sound through the volume control module.
For example, when the volume mode of the environmental sound is the privacy mode, the volume control module of the electronic device only controls the screen sounder to sound. For example, the volume control module controls the sounding parameter of the receiver to be 0, and the sounding parameter of the screen sounder to be 1.0; that is, when the electronic device is talking, if the volume mode of the environmental sound is the privacy mode, the sound producing proportion of the receiver of the electronic device is 0% (i.e. no sound production), and the sound producing proportion of the screen generator of the electronic device is 100% (i.e. sound production is performed entirely by the screen generator).
For another example, when the volume mode of the environmental sound is the normal mode, the volume control module of the electronic device controls the receiver and the screen sounder to perform frequency-division sounding. In this case, the sounding parameter of the receiver illustratively satisfies the following formula: L1 = (X - Q1)/M; the sounding parameter of the screen sounder satisfies the following formula: L2 = (Q2 - X)/M; wherein L1 represents the sounding parameter of the receiver, L2 represents the sounding parameter of the screen sounder, X represents the noise data of the current environmental sound, Q1 represents a first preset value, Q2 represents a second preset value, and M represents a third preset value.
In some embodiments, Q1 = 35, Q2 = 65, and M = 30. On this basis, for example, when the volume mode of the environmental sound is the normal mode and the noise data is 50 dB, the sounding parameter of the receiver is 0.5 and the sounding parameter of the screen sounder is 0.5; that is, when the electronic device is in a call, if the environmental sound is in the normal mode, the sound emission ratio of the receiver of the electronic device is 50%, and the sound emission ratio of the screen sounder of the electronic device is 50%.
For another example, when the volume mode of the environmental sound is the high volume mode, the volume control module of the electronic device controls only the receiver to sound. For example, the volume control module controls the sounding parameter of the receiver to be 1.0 and the sounding parameter of the screen sounder to be 0; that is, when the electronic device is in a call, if the volume mode of the environmental sound is the high volume mode, the sound emission ratio of the receiver of the electronic device is 100% (i.e., sound is produced entirely by the receiver), and the sound emission ratio of the screen sounder of the electronic device is 0% (i.e., no sound production).
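Combining the three volume modes with the example preset values Q1 = 35, Q2 = 65 and M = 30, the calculation could be sketched as follows; the class is hypothetical and only illustrates the formulas above:

```java
/** Sketch of the sounding parameter calculation for the three volume modes described above. */
public class SoundingParams {

    public final double receiver;        // L1 in the description above
    public final double screenSounder;   // L2 in the description above

    private SoundingParams(double receiver, double screenSounder) {
        this.receiver = receiver;
        this.screenSounder = screenSounder;
    }

    /** Uses the example preset values Q1 = 35, Q2 = 65 and M = 30 given in the description. */
    public static SoundingParams fromNoiseData(double x) {
        if (x < 35.0) {
            return new SoundingParams(0.0, 1.0);                             // privacy mode: screen sounder only
        } else if (x <= 65.0) {
            return new SoundingParams((x - 35.0) / 30.0, (65.0 - x) / 30.0); // normal mode: frequency-division sounding
        } else {
            return new SoundingParams(1.0, 0.0);                             // high volume mode: receiver only
        }
    }
}
```

For example, fromNoiseData(50) yields 0.5 for the receiver and 0.5 for the screen sounder, matching the 50 dB example above.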
And A19, the electronic equipment sends sounding parameters of the receiver and sounding parameters of the screen sounder to the audio driving module through the volume control module.
For example, the volume control module may invoke the audio interface of the hardware abstraction layer to send the sound parameters of the receiver and the sound parameters of the screen sounder to the audio driver module.
A20, the electronic equipment drives the receiver and the screen sounder to output audio according to respective sounding parameters through the audio driving module.
In some embodiments, the audio driving module includes a power amplifier, the volume control module sends the sounding parameters of the receiver and the screen sounder to the power amplifier of the audio driving module, and after the processing of the power amplifier, the receiver and the screen sounder are finally driven to output audio according to the sounding parameters.
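The hand-off from the volume control module to the audio driver (steps A19 and A20) could be sketched as below; the AudioDriver interface is hypothetical and does not correspond to an actual Android HAL API:

```java
/** Hypothetical interface standing in for the HAL audio interface / audio driver hand-off. */
public interface AudioDriver {
    /** Applies the sounding parameters (0.0 to 1.0) to the receiver and the screen sounder. */
    void applySoundingParams(double receiverLevel, double screenSounderLevel);
}

/** Sketch of the volume control module pushing the two sounding parameters down to the driver. */
class VolumeControlOutput {

    private final AudioDriver audioDriver;   // assumed to be obtained through the audio interface of the HAL

    VolumeControlOutput(AudioDriver audioDriver) {
        this.audioDriver = audioDriver;
    }

    /** Sends the parameters so the driver (and its power amplifier) can drive the audio output. */
    void apply(double receiverParam, double screenSounderParam) {
        audioDriver.applySoundingParams(receiverParam, screenSounderParam);
    }
}
```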
It should be noted that, in the steps A1 to A20, A8 to A11 are optional steps. Also, the order between steps A1-A20 is not limited in the embodiments of the present application.
In summary, in the embodiments of the present application, when the user holds the electronic device to communicate with another electronic device, the electronic device can detect the noise data of the current environmental sound. When the noise data is in the first noise section, the electronic device outputs a sound signal through the first sound generating unit; when the noise data is in the second noise section, the electronic device outputs sound signals through the first sound generating unit and the second sound generating unit; when the noise data is in the third noise section, the electronic device outputs a sound signal through the second sound generating unit. That is, when the noise data is in different noise sections, the electronic device can output sound signals through different sound generating units, so that during a call the user does not disturb surrounding people in a quiet environment and can clearly hear the voice of the other party in a noisy environment, which improves the user's call experience.
It should be noted that, in some embodiments, some or all of the steps in the above embodiments (e.g., A1-A20 in the flow shown in fig. 7) may be recombined, without conflict, to form new embodiments. Such recombined embodiments are not described in detail in the embodiments of the present application.
Fig. 8 is a flow chart of a method for adjusting call volume according to an embodiment of the present application. The method may include S501-S507.
S501, the electronic equipment judges whether the intelligent tuning function of the call is supported.
If the electronic device supports the call intelligent tuning function, the electronic device can continue to execute the call volume adjusting method provided in the embodiments of the present application; if the electronic device does not support the call intelligent tuning function, the electronic device does not meet the conditions for executing the call volume adjusting method provided in the embodiments of the present application.
For example, when the electronic device includes both a receiver and a screen sounder, the electronic device supports a call smart tuning function.
S502, the electronic equipment responds to the operation of a user on a setting interface to set an application list.
For example, when the electronic device supports the call intelligent tuning function, the setting interface of the electronic device automatically displays a "call intelligent tuning" setting item. The user may then set an application in the "call intelligent tuning" setting that allows call intelligent tuning functions to be invoked.
It should be noted that, in general, the "call intelligent tuning" setting item is in an on state, and the call intelligent tuning function is only applicable to cellular calls, i.e., it is turned off by default for other third party communication applications. In this case, the user needs to set the corresponding third party communication applications in the "call intelligent tuning" setting item. In other words, the application list set by the user only includes third party communication applications.
S503, the electronic device judges whether a call is generated.
For example, when a user initiates a call using a certain call application, the call application may send a call request to a call audio module of the electronic device, where the call request is used to obtain permission of the call application to use the audio device. That is, the call application sends a call request to the call audio module, so as to inform the call audio module that a call is currently generated, and the use right of the audio device needs to be obtained, so when the call audio module of the electronic device receives the call request sent by the call application, the electronic device determines that a call is generated.
In some embodiments, when the electronic device determines that a call is generated, the electronic device continues to execute the call volume adjustment method provided in the embodiments of the present application; when the electronic equipment judges that no call is generated, the electronic equipment does not execute the call volume adjusting method.
S504, the electronic device judges whether the earpiece is used to sound.
The electronic device detects, through the audio channel detection module, whether the earpiece is used to sound during the call. For example, the audio channel detection module receives the audio channel broadcast by the system in real time. The audio channel includes a speaker channel, a wired earphone channel, a Bluetooth channel, a receiver channel, and a screen sounder channel. When the audio channel detection module detects that the audio channel is the receiver channel, the audio channel detection module determines that the electronic device uses the receiver to sound, that is, the electronic device uses the earpiece to sound.
S505, the electronic equipment judges whether the call application is a target communication application.
It should be noted that, the call request sent by the communication application to the call audio module of the electronic device includes application information of the communication application. And then the call audio module synchronizes the application information to the volume control module, and the volume control module judges whether the call application is a target communication application according to the application information.
In addition, after the user finishes setting the application list through the setting interface, the electronic equipment stores the application list set by the user in the database. In this case, after the call audio module synchronizes the application information to the volume control module, the volume control module matches the application information with the application information in the application list, so as to determine whether the communication application is a target communication application.
In some embodiments, when the electronic device determines that the communication application is the target communication application, the electronic device continues to execute the call volume adjustment method provided in the embodiments of the present application; when the electronic equipment judges that the call application is not the target communication application, the electronic equipment does not execute the call volume adjusting method.
After the electronic device performs steps S501-S505, if the electronic device supports the call intelligent tuning function, the electronic device is in a call through the target communication application, and the electronic device uses the earpiece to sound during the call, the electronic device triggers the environmental sound detection function. Illustratively, the environmental sound identification module of the electronic device collects and calculates the noise data of the environmental sound through the microphone, and determines the volume mode of the current environmental sound based on the noise data.
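The combined conditions of S501-S505 (S502 only prepares the application list used in S505) could be summarized in the following illustrative sketch (names are assumptions):

```java
/** Sketch of the guard conditions S501-S505 that gate the environmental sound detection. */
public class SmartTuningGuard {

    public boolean shouldTriggerAmbientSoundDetection(
            boolean smartTuningSupported,    // S501: receiver and screen sounder are both present
            boolean callInProgress,          // S503: a call has been generated
            boolean earpieceSounding,        // S504: the receiver or screen sounder channel is active
            boolean targetCommunicationApp   // S505: the call application may invoke the function
    ) {
        return smartTuningSupported && callInProgress && earpieceSounding && targetCommunicationApp;
    }
}
```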
S506, the electronic device determines the respective sounding parameters of the receiver and the screen sounder according to the noise data and the volume mode of the environmental sound.
Illustratively, the electronic device determines, via the volume control module, respective sound emission parameters of the receiver and the screen sound emitter.
S507, the electronic equipment outputs audio according to the sounding parameters of the receiver and the screen sounder.
The electronic device drives the receiver and the screen sounder to output audio according to respective sounding parameters through the audio driving module.
It should be noted that, for the illustration and the beneficial effects of S501-S507, reference may be made to the illustration and the beneficial effects of step A1 to step A12 in the above embodiments, which are not described in detail here. In addition, the foregoing embodiments only illustrate, by way of example, some application scenarios and specific implementations provided in the embodiments of the present application. The application scenarios and specific implementations of the method provided in the embodiments of the present application include, but are not limited to, those described in the above examples. For example, the scenario to which the method provided in the embodiments of the present application is applicable may also be a voice playing scenario, a video playing scenario, a music playing scenario, or another suitable scenario; for the specific implementations, reference may be made to the foregoing embodiments, which are not described in detail here.
The embodiment of the application provides electronic equipment, which can comprise: the device comprises a first sound generating unit, a second sound generating unit, a microphone, a sensor and a processor; the first sound generating unit, the second sound generating unit, the microphone and the sensor are coupled with the processor; the sensor is used for receiving a first operation of a user, and the first operation is used for triggering the electronic equipment to initiate a call; the processor is used for responding to the first operation and controlling the microphone to detect noise data of the environmental sound; the processor is used for judging that the noise data is in a first noise section and controlling the first sound generating unit to output sound signals; in the first noise section, the larger the noise data is, the larger the volume of the sound signal output by the first sound generating unit is; the processor is used for judging that the noise data is switched from the first noise section to the second noise section and controlling the first sounding unit and the second sounding unit to output sound signals; in the second noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is, and the smaller the volume of the sound signal output by the first sound generating unit is; in the second noise section, the smaller the noise data is, the smaller the volume of the sound signal output by the second sound generating unit is, and the larger the volume of the sound signal output by the first sound generating unit is; wherein the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section.
In some embodiments, the first sounding unit, the second sounding unit, the microphone, the sensor and the processor are further configured to perform the respective functions or steps performed by the electronic device in the foregoing embodiments of the call volume adjusting method. Note that the structure of the electronic device may refer to the structure of the electronic device 100 shown in fig. 3. For the technical effects brought by the electronic device, reference may be made to the technical effects brought by the call volume adjusting method, which are not repeated here.
It will be apparent to those skilled in the art from this description that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

1. The method for adjusting the call volume is characterized by being applied to electronic equipment, wherein the electronic equipment comprises a first sounding unit and a second sounding unit; the method comprises the following steps:
the electronic equipment receives a first operation of a user, wherein the first operation is used for triggering the electronic equipment to initiate a call;
the electronic equipment responds to the first operation and detects noise data of environmental sounds;
the electronic equipment judges that the noise data is in a first noise section, and outputs a sound signal through the first sound generating unit; in the first noise section, the larger the noise data is, the larger the volume of the sound signal output by the first sound generating unit is;
the electronic equipment judges that the noise data is switched from the first noise interval to a second noise interval, and sound signals are output through the first sound generating unit and the second sound generating unit; in the second noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is, and the smaller the volume of the sound signal output by the first sound generating unit is; in the second noise section, the smaller the noise data is, the smaller the volume of the sound signal output by the second sound generating unit is, and the larger the volume of the sound signal output by the first sound generating unit is;
Wherein the minimum value of the noise data of the second noise section is larger than the maximum value of the noise data of the first noise section.
2. The method according to claim 1, wherein the method further comprises:
the electronic equipment judges that the noise data is switched from the second noise interval to a third noise interval, and sound signals are output through the second sound generating unit; in the third noise section, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is;
wherein the maximum value of the noise data of the third noise section is greater than the maximum value of the noise data of the second noise section.
3. A method according to claim 1 or 2, characterized in that,
noise data in the first noise interval is less than 35dB; the noise data in the second noise section is greater than or equal to 35dB and less than or equal to 65dB; the noise data in the third noise interval is greater than 65dB.
4. A method according to any one of claims 1-3, wherein the method further comprises:
the electronic equipment adjusts the volume of the sound signals output by the first sound generating unit and/or the second sound generating unit according to the size of the noise data by adopting the following formula;
L1 = (65 - X)/30, L2 = (X - 35)/30;
wherein L1 represents the volume of the sound signal output by the first sound generating unit, L2 represents the volume of the sound signal output by the second sound generating unit, and X represents the noise data.
5. The method of any of claims 1-4, wherein the electronic device receiving a first operation of a user comprises:
the electronic equipment receives a first operation of a user in a target communication application; the target communication application is an application allowing calling of a communication intelligent tuning function; the communication intelligent tuning function is that the electronic equipment selects the first sound generating unit and/or the second sound generating unit to output sound signals according to the size of the noise data in the conversation process.
6. The method of claim 5, wherein prior to the electronic device detecting noise data for an ambient sound in response to the first operation, the method further comprises:
the electronic equipment judges that the electronic equipment comprises the first sound generating unit and the second sound generating unit, and starts the communication intelligent tuning function.
7. The method of claim 5, wherein prior to the electronic device detecting noise data for an ambient sound in response to the first operation, the method further comprises:
Displaying indication information after the electronic equipment is started, or displaying indication information after the electronic equipment receives the first operation; the indication information is used for reminding a user to start the communication intelligent tuning function;
and the electronic equipment responds to the preset operation of the user on the indication information, and starts the communication intelligent tuning function.
8. The method of any one of claims 5-7, wherein a plurality of communication applications are installed in the electronic device; the plurality of communication applications includes a system application and a third party application; the method further comprises the steps of:
the electronic equipment sets the plurality of communication applications as the target communication application, and allows the plurality of communication applications to call the communication intelligent tuning function; or,
and the electronic equipment sets the system application as the target communication application, allows the system application to call the communication intelligent tuning function, and prohibits the third party application from calling the communication intelligent tuning function.
9. The method of any one of claims 5-8, wherein a plurality of communication applications are installed in the electronic device; the plurality of communication applications include a system application and a third party communication application; the method further comprises the steps of:
The electronic equipment displays a setting interface; the setting interface comprises a plurality of application options, the application options are in one-to-one correspondence with the communication applications, and the application options are used for setting the capability of calling the intelligent tuning function of the communication corresponding to the communication applications;
and the electronic equipment responds to the selection operation of the user on the application options in the setting interface, sets the communication application selected by the user as the target communication application, and allows the communication application selected by the user to call the communication intelligent tuning function.
10. The method according to any one of claims 1-9, wherein prior to said detecting noise data of an ambient sound, the method further comprises:
the electronic device determines that the electronic device uses an earpiece to sound during the call; wherein the earpiece sounding includes the electronic device outputting a sound signal using at least one of the first sounding unit and/or the second sounding unit.
11. The method of any one of claims 1-10, wherein the first sound generating unit is a screen sound generator and the second sound generating unit is a receiver.
12. An electronic device comprising a first sound generating unit, a second sound generating unit, a microphone, a sensor, and a processor; the first sound generating unit, the second sound generating unit, the microphone and the sensor are coupled with the processor;
The sensor is used for receiving a first operation of a user, and the first operation is used for triggering the electronic equipment to initiate a call;
the processor is used for responding to the first operation and controlling the microphone to detect noise data of environmental sounds;
the processor is used for determining that the noise data is in a first noise interval and controlling the first sound generating unit to output a sound signal; in the first noise interval, the larger the noise data is, the larger the volume of the sound signal output by the first sound generating unit is;
the processor is used for determining that the noise data is switched from the first noise interval to a second noise interval and controlling the first sound generating unit and the second sound generating unit to output sound signals; in the second noise interval, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is, and the smaller the volume of the sound signal output by the first sound generating unit is; in the second noise interval, the smaller the noise data is, the smaller the volume of the sound signal output by the second sound generating unit is, and the larger the volume of the sound signal output by the first sound generating unit is;
wherein the minimum value of the noise data of the second noise interval is larger than the maximum value of the noise data of the first noise interval.
13. The electronic device of claim 12, wherein,
the processor is further used for determining that the noise data is switched from the second noise interval to a third noise interval and controlling the second sound generating unit to output a sound signal; in the third noise interval, the larger the noise data is, the larger the volume of the sound signal output by the second sound generating unit is;
wherein the maximum value of the noise data of the third noise interval is greater than the maximum value of the noise data of the second noise interval.
14. The electronic device according to claim 12 or 13, characterized in that,
the noise data in the first noise interval is less than 35 dB; the noise data in the second noise interval is greater than or equal to 35 dB and less than or equal to 65 dB; and the noise data in the third noise interval is greater than 65 dB.
15. The electronic device of any one of claims 12-14, wherein,
the processor is further used for controlling the volume of the sound signals output by the first sound generating unit and/or the second sound generating unit according to the magnitude of the noise data by using the following formulas:
L1 = (65 - X)/30, L2 = (X - 35)/30;
wherein L1 represents the volume of the sound signal output by the first sound generating unit, L2 represents the volume of the sound signal output by the second sound generating unit, and X represents the noise data.
16. The electronic device of any one of claims 12-15, wherein the sensor is configured to receive a first operation of the user on a target communication application; the target communication application is an application that is allowed to call a communication intelligent tuning function; the communication intelligent tuning function refers to the processor selecting the first sound generating unit and/or the second sound generating unit to output sound signals according to the magnitude of the noise data during the call.
17. The electronic device of claim 16, wherein,
the processor is further used for determining that the electronic device comprises the first sound generating unit and the second sound generating unit, and starting the communication intelligent tuning function.
18. The electronic device of claim 16, wherein the electronic device further comprises: a display screen;
the display screen is used for displaying indication information after the electronic device is powered on; or the display screen is used for displaying indication information after the sensor receives the first operation; the indication information is used for prompting the user to start the communication intelligent tuning function;
the processor is further used for responding to a preset operation of the user on the indication information and starting the communication intelligent tuning function.
19. The electronic device of any one of claims 16-18, wherein a plurality of communication applications are installed in the electronic device; the plurality of communication applications includes a system application and a third party application;
the processor is further configured to set the plurality of communication applications as the target communication application, and allow the plurality of communication applications to call the communication intelligent tuning function; or,
the processor is further configured to set the system application as the target communication application, allow the system application to call the communication intelligent tuning function, and prohibit the third party application from calling the communication intelligent tuning function.
20. The electronic device of any one of claims 16-19, wherein a plurality of communication applications are installed in the electronic device; the plurality of communication applications includes a system application and a third party application;
the display screen is used for displaying a setting interface; the setting interface comprises a plurality of application options, the application options are in one-to-one correspondence with the communication applications, and the application options are used for setting whether the corresponding communication applications are capable of calling the communication intelligent tuning function;
The processor is further configured to set the communication application selected by the user as the target communication application in response to a user selecting an application option in the setting interface, and allow the communication application selected by the user to call the communication intelligent tuning function.
21. The electronic device of any one of claims 12-20, wherein,
the processor is further used for determining that the electronic device performs earpiece sounding during the call; wherein the earpiece sounding includes the electronic device outputting a sound signal using at least one of the first sound generating unit and the second sound generating unit.
22. The electronic device of any one of claims 12-21, wherein the first sound generating unit is a screen sound generator and the second sound generating unit is a receiver.
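To make the behaviour recited in claims 12-15 easier to follow, here is a minimal sketch of how the three noise intervals, the 35 dB / 65 dB bounds and the crossfade formula could fit together. It is an illustration rather than the patented implementation: the names (select_output_volumes, BASE_GAIN, MAX_NOISE_DB), the linear ramps in the first and third intervals, and the treatment of L1 and L2 as crossfade weights scaled by an assumed base gain are all assumptions; only the interval bounds and the formula itself come from the claims.

```python
# Illustrative sketch only -- not the patented implementation.
# The 35 dB / 65 dB bounds and the crossfade formula come from claims 14-15.
# BASE_GAIN, MAX_NOISE_DB and the linear ramps in the first and third
# intervals are assumptions: the claims only require the volume to grow
# with the noise level inside those intervals.

LOW_DB = 35.0        # upper bound of the first (quiet) noise interval
HIGH_DB = 65.0       # upper bound of the second (medium) noise interval
MAX_NOISE_DB = 95.0  # assumed level at which the receiver reaches full volume
BASE_GAIN = 0.6      # assumed gain at the interval boundaries (leaves headroom)


def select_output_volumes(noise_db: float) -> tuple[float, float]:
    """Return (screen_sounder_gain, receiver_gain), each in [0, 1].

    The first sound generating unit is the screen sound generator and the
    second is the receiver (claims 11 and 22).
    """
    x = max(noise_db, 0.0)
    if x < LOW_DB:
        # First interval: screen sound generator only, louder as noise grows.
        return BASE_GAIN * x / LOW_DB, 0.0
    if x <= HIGH_DB:
        # Second interval: both units sound, cross-faded by claim 15's
        # weights L1 = (65 - X) / 30 and L2 = (X - 35) / 30.
        l1 = (HIGH_DB - x) / (HIGH_DB - LOW_DB)
        l2 = (x - LOW_DB) / (HIGH_DB - LOW_DB)
        return BASE_GAIN * l1, BASE_GAIN * l2
    # Third interval: receiver only, ramping from BASE_GAIN toward full volume.
    ramp = min((x - HIGH_DB) / (MAX_NOISE_DB - HIGH_DB), 1.0)
    return 0.0, BASE_GAIN + (1.0 - BASE_GAIN) * ramp


if __name__ == "__main__":
    for level in (20, 35, 50, 65, 80, 95):
        s, r = select_output_volumes(level)
        print(f"{level:>3} dB -> screen sounder {s:.2f}, receiver {r:.2f}")
```

Scaling the crossfade weights by a base gain is only a convenience: it leaves headroom for the monotonic ramps that claims 12 and 13 require below 35 dB and above 65 dB, while keeping the output continuous at the interval boundaries.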
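The per-application gating of the communication intelligent tuning function (claims 8-9 and 19-20) amounts to an allow-list check before the tuning logic runs. The sketch below is hypothetical: the TuningPolicy class, its field names, the example package names and the folding of the claimed alternatives into one policy object are assumptions for illustration only.

```python
# Hypothetical illustration of the gating in claims 8-9 / 19-20.
from dataclasses import dataclass, field


@dataclass
class TuningPolicy:
    # Claim 8 / 19, first alternative: every communication application is a
    # target and may call the communication intelligent tuning function.
    allow_all_communication_apps: bool = False
    # Claim 8 / 19, second alternative: only system applications are targets;
    # third party applications are prohibited.
    system_apps: set[str] = field(default_factory=set)
    # Claim 9 / 20: applications the user selected in the setting interface.
    user_selected_apps: set[str] = field(default_factory=set)

    def may_call_tuning(self, app: str) -> bool:
        """Decide whether `app` may call the communication intelligent
        tuning function under this policy."""
        if self.allow_all_communication_apps:
            return True
        return app in self.system_apps or app in self.user_selected_apps


# Example: the system dialer plus one user-selected application are targets.
policy = TuningPolicy(system_apps={"com.example.systemdialer"},
                      user_selected_apps={"com.example.voip"})
print(policy.may_call_tuning("com.example.systemdialer"))  # True
print(policy.may_call_tuning("com.example.voip"))          # True
print(policy.may_call_tuning("com.example.chat"))          # False
```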
CN202111328759.9A 2021-11-10 2021-11-10 Call volume adjusting method, electronic equipment and storage medium Pending CN116112600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111328759.9A CN116112600A (en) 2021-11-10 2021-11-10 Call volume adjusting method, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116112600A true CN116112600A (en) 2023-05-12

Family

ID=86256712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111328759.9A Pending CN116112600A (en) 2021-11-10 2021-11-10 Call volume adjusting method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116112600A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160189715A1 (en) * 2014-12-26 2016-06-30 Panasonic Intellectual Property Corporation Of America Speech recognition device and method
CN109032556A (en) * 2018-07-09 2018-12-18 Oppo广东移动通信有限公司 Sounding control method, device, electronic device and storage medium
CN109361797A (en) * 2018-10-30 2019-02-19 维沃移动通信有限公司 A kind of vocal technique and mobile terminal
CN110191221A (en) * 2019-04-15 2019-08-30 深圳市万普拉斯科技有限公司 Audio-frequency processing method, circuit and terminal
CN110460915A (en) * 2019-07-29 2019-11-15 上海联影医疗科技有限公司 Volume adjustment device and adaptive talkback system
CN112383655A (en) * 2020-11-02 2021-02-19 Oppo广东移动通信有限公司 Electronic device, sound enhancement method for electronic device, and storage medium


Similar Documents

Publication Publication Date Title
WO2021147415A1 (en) Method for implementing stereo output and terminal
CN107493500B (en) Multimedia resource playing method and device
CN109062535B (en) Sound production control method and device, electronic device and computer readable medium
WO2023015940A1 (en) Mobile terminal that prevents sound leakage and sound output method for mobile terminal
CN109302664A (en) Display screen and its sound output position control method and device
WO2020107290A1 (en) Audio output control method and apparatus, computer readable storage medium, and electronic device
CN113835518A (en) Vibration control method and device, vibration device, terminal and storage medium
CN113438364B (en) Vibration adjustment method, electronic device, and storage medium
CN108093132B (en) Terminal device and ringtone volume control method
CN116137639A (en) Cross-equipment audio data transmission method and electronic equipment
WO2018058331A1 (en) Method and apparatus for controlling volume
CN116112600A (en) Call volume adjusting method, electronic equipment and storage medium
CN115086888A (en) Message notification method and device and electronic equipment
CN113259810A (en) Audio module and terminal equipment
CN111866226B (en) Terminal and sound production method
CN114257673A (en) Loudspeaker adjusting method and device, electronic equipment and storage medium
WO2023160204A1 (en) Audio processing method, and electronic device
WO2023284403A1 (en) Audio processing method and device
CN111225318A (en) Audio adjusting method and device and electronic equipment
CN114567840B (en) Audio output method and device, mobile terminal and storage medium
WO2024152690A1 (en) Sound channel mode switching method, device, and storage medium
WO2024016229A1 (en) Audio processing method and electronic device
CN211457354U (en) Audio module and terminal equipment
WO2024093452A1 (en) Call control method, electronic device and storage medium
CN117135262B (en) Conversation method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination