CN109089192B - Method for outputting voice and terminal equipment - Google Patents


Info

Publication number
CN109089192B
Authority
CN
China
Prior art keywords
sound
voice
unit
sound generating
terminal device
Prior art date
Legal status
Active
Application number
CN201810877952.XA
Other languages
Chinese (zh)
Other versions
CN109089192A (en)
Inventor
雷钊
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201810877952.XA
Publication of CN109089192A
Application granted
Publication of CN109089192B
Status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R3/12: Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70: Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82: Protecting input, output or interconnection devices
    • G06F21/84: Protecting input, output or interconnection devices; output devices, e.g. displays or monitors

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Telephone Function (AREA)

Abstract

The embodiments of the invention provide a method for outputting voice and a terminal device, applied to the field of communication technology, which can solve the problem of poor privacy when a terminal device outputs voice. The scheme is applied to a terminal device provided with at least two sound generating units. The method includes: determining a first sound generating unit from the at least two sound generating units, where the first sound generating unit is the one closest to the user's ear among the at least two sound generating units; and outputting the voice through the first sound generating unit. In particular, the method and device can be applied to a scenario in which the terminal device outputs voice using a screen sounding technology in earpiece mode.

Description

Method for outputting voice and terminal equipment
Technical Field
The embodiments of the invention relate to the field of communication technology, and in particular to a method for outputting voice and a terminal device.
Background
With the growing adoption of full-screen displays, screen sounding technology is increasingly used in terminal devices. In conventional screen sounding technology, an exciter is arranged below the screen of a terminal device, and the terminal device vibrates the screen as a whole through the exciter to output voice from the screen. That is, the terminal device outputs voice through the entire screen.
However, when the voice output mode of the terminal device is the earpiece mode, the voice output by the terminal device leaks easily, resulting in poor privacy during the voice output process.
Disclosure of Invention
The embodiments of the invention provide a method for outputting voice and a terminal device, aiming to solve the problem of poor privacy when a terminal device outputs voice.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a method for outputting voice, applied to a terminal device provided with at least two sound generating units. The method includes: determining a first sound generating unit from the at least two sound generating units, where the first sound generating unit is the one closest to the user's ear among the at least two sound generating units; and outputting the voice through the first sound generating unit.
In a second aspect, an embodiment of the present invention further provides a terminal device provided with at least two sound generating units. The terminal device includes a determining module and a first output module. The determining module is configured to determine a first sound generating unit from the at least two sound generating units, where the first sound generating unit is the one closest to the user's ear among the at least two sound generating units. The first output module is configured to output the voice through the first sound generating unit determined by the determining module.
In a third aspect, an embodiment of the present invention provides a terminal device including a processor, a memory, and a computer program stored in the memory and executable on the processor; when executed by the processor, the computer program implements the steps of the method for outputting voice according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for outputting voice according to the first aspect.
In the embodiments of the invention, at least two sound generating units are arranged in the terminal device. Specifically, the terminal device may determine a first sound generating unit from the at least two sound generating units, the first sound generating unit being the one closest to the user's ear, and output the voice through it. With this scheme, the terminal device controls the sound generating unit closest to the user's ear to output the voice, so that sound is produced mainly from the area of the terminal device where the first sound generating unit is located, while the other areas produce little or no sound, making it difficult for other users to overhear the voice output by the terminal device. The privacy of the voice output process can therefore be improved.
Drawings
Fig. 1 is a schematic architecture diagram of a possible Android operating system according to an embodiment of the present invention;
Fig. 2 is a first flowchart of a method for outputting voice according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the position distribution of sound generating units according to an embodiment of the present invention;
Fig. 4 is a second flowchart of a method for outputting voice according to an embodiment of the present invention;
Fig. 5 is a third flowchart of a method for outputting voice according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a possible structure of a terminal device according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. "Plurality" means two or more.
It should also be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" is not to be construed as preferred or advantageous over other embodiments or designs. Rather, these words are intended to present related concepts in a concrete fashion.
In the embodiments of the present invention, unless otherwise explicitly specified or limited, the term "connected" should be interpreted broadly; for example, a connection may be fixed, detachable, or integral, and may be direct or indirect through an intermediary. Those skilled in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
In the embodiments of the present invention, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate feature. Moreover, a first feature being "above" or "over" a second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature. A first feature being "under" or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature. Those skilled in the art can understand the specific meanings of the above terms according to the specific situation.
The terms "first" and "second" and the like in the description and claims of the present invention are used to distinguish different objects, not to describe a particular order of the objects. For example, "first sound generating unit" and "second sound generating unit" distinguish different sound generating units and do not imply a specific order.
The method for outputting voice and the terminal device provided by the embodiments of the invention can be applied to a process in which the terminal device outputs voice using a screen sounding technology.
Specifically, in the embodiments of the invention, at least two sound generating units are arranged in the terminal device. Because the terminal device can control each of its at least two sound generating units individually, it can output voice through one or more of them as required; for example, it can output voice through the sound generating unit closest to the user's ear while the other sound generating units output no voice. In this way, the problem of poor privacy when the terminal device outputs voice can be solved.
It should be noted that, in the method for outputting voice provided by the embodiments of the present invention, the execution subject may be a terminal device, a central processing unit (CPU) of the terminal device, or a control module in the terminal device for executing the method. In the embodiments below, the method for outputting voice is described with the terminal device as the execution subject.
The terminal device in the embodiments of the present invention may be a terminal device having an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system; the embodiments of the present invention are not specifically limited in this respect.
Taking the Android operating system as an example, the following describes the software environment to which the method for outputting voice provided by the embodiments of the present invention is applied.
Fig. 1 is a schematic architecture diagram of a possible Android operating system according to an embodiment of the present invention. In Fig. 1, the architecture of the Android operating system includes four layers: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, the Linux kernel layer).
The application layer includes the various applications (including system applications and third-party applications) in the Android operating system.
The application framework layer is the framework of applications; developers can develop applications based on the application framework layer while complying with its development principles, for example system applications such as settings, chat, and camera applications, as well as third-party settings, camera, and chat applications.
The system runtime layer includes libraries (also called system libraries) and the Android runtime environment. The libraries mainly provide the various resources required by the Android operating system. The Android runtime environment provides the software environment in which applications run.
The kernel layer is the operating system layer of the Android operating system and is the lowest layer of the Android software stack. Based on the Linux kernel, it provides core system services and hardware-related drivers for the Android operating system.
Taking the Android operating system as an example, developers may develop a software program implementing the method for outputting voice provided by the embodiments of the present invention based on the system architecture shown in Fig. 1, so that the method can run on the Android operating system shown in Fig. 1. That is, the processor or the terminal device can implement the method by running the software program in the Android operating system.
The method for outputting voice provided by the embodiments of the present invention is described in detail below with reference to the flowchart shown in Fig. 2. Although a logical order of the method is shown in the flowchart, in some cases the steps shown or described may be performed in a different order. For example, the method of outputting voice shown in Fig. 2 may include S201 and S202:
S201: the terminal device determines a first sound generating unit from the at least two sound generating units, where the first sound generating unit is the one closest to the user's ear among the at least two sound generating units.
The voice output modes of the terminal device may include an earpiece mode and a speaker mode.
It can be understood that, in a scenario where the voice output mode of the terminal device is the earpiece mode, the user generally requires that the voice output by the terminal device not be heard by other users; that is, the privacy requirement on the voice output process is high.
Optionally, the terminal device may determine the first sound generating unit from the at least two sound generating units in a scenario where its voice output mode is the earpiece mode.
Optionally, the at least two sound generating units in the terminal device may be distributed below the screen of the terminal device, or below its bezel. Of course, the at least two sound generating units may also be disposed at other positions in the terminal device, which is not described again in the embodiments of the present invention.
It should be noted that the terminal device may determine one or more first sound generating units from the at least two sound generating units, all of which are at the same (minimal) distance from the user's ear. That is, the number of sound generating units closest to the user's ear among the at least two sound generating units may be one or more.
It can be understood that the number of sound generating units closest to the user's ear among the at least two sound generating units is usually one, i.e., there is usually a single first sound generating unit.
Optionally, in the embodiments of the present invention, the terminal device may execute S201 periodically (e.g., every minute), or may execute S201 when it is about to output voice.
S202: the terminal device outputs the voice through the first sound generating unit.
Optionally, when the terminal device determines multiple first sound generating units, it may output the voice through the multiple first sound generating units simultaneously, or through any one of them.
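The tie case just described can be sketched in code. This is an illustrative sketch, not code from the patent; the unit names and distance values below are hypothetical.

```python
# Illustrative sketch (not from the patent): when several sound generating
# units are equally close to the user's ear, all of them qualify as "first"
# units, and the terminal may drive all of them or pick any one.

def nearest_units(distances):
    """Return every unit whose measured distance equals the minimum."""
    d_min = min(distances.values())
    return {unit for unit, d in distances.items() if d == d_min}

# Hypothetical measurements in metres: units 302 and 303 tie at 2 cm.
measured = {"unit_302": 0.02, "unit_303": 0.02, "unit_304": 0.05}
tied = nearest_units(measured)  # {"unit_302", "unit_303"}
```

The device could then output voice through every unit in the returned set simultaneously, or through any single member of it.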
It can be understood that, in the embodiments of the present invention, each of the at least two sound generating units corresponds to a different area of the terminal device. That is, one sound generating unit can make the area where it is located emit sound to output voice, while the other areas of the terminal device do not output voice.
It should be noted that the method for outputting voice provided by the embodiments of the present invention is applied to a terminal device provided with at least two sound generating units. The terminal device may determine a first sound generating unit, i.e., the one closest to the user's ear, from the at least two sound generating units, and output the voice through it. Because the terminal device controls the sound generating unit closest to the user's ear to output the voice, sound is produced mainly from the area where the first sound generating unit is located, the other areas produce little or no sound, and other users cannot easily overhear the voice output by the terminal device. The privacy of the voice output process can therefore be improved.
In a possible implementation manner, the at least two sound generating units may be disposed below the screen of the terminal device.
It should be noted that each of the at least two sound generating units below the screen corresponds to a different area of the screen. The vibration of one sound generating unit drives its corresponding screen area to vibrate and emit sound.
Optionally, the at least two sound generating units are evenly distributed below the screen of the terminal device. The terminal device can thus control any one of the at least two sound generating units to vibrate, driving the corresponding screen area to vibrate and emit sound.
Optionally, each of the at least two sound generating units may be connected to the screen; for example, each sound generating unit may be affixed below the screen of the terminal device.
For example, Fig. 3 is a schematic diagram of the position distribution of sound generating units according to an embodiment of the present invention. Fig. 3(a) shows the inner surface 301 of the screen 30, which is the surface of the screen 30 to which sound generating units 302, 303, 304, and 305 are affixed. Fig. 3(b) shows the outer surface 306 of the screen 30; the shaded area on the screen 30 in Fig. 3(b) may be the area onto which the user's ear maps, and it falls within the area corresponding to one sound generating unit. That sound generating unit is therefore the one closest to the user's ear.
In addition, it should be noted that the outer surface 306 of the screen 30 shown in Fig. 3 is opposite the inner surface 301; the outer surface 306 is in contact with the air and is the surface of the screen 30 closer to the user's ear. Specifically, a sound generating unit of the terminal device vibrates to drive the screen 30 to vibrate and generate the voice, which is emitted from the inner surface 301 and transmitted through the outer surface 306 to the user's ear.
Specifically, the screen 30 shown in Fig. 3(a) and 3(b) includes areas P1, P2, P3, and P4. Sound generating unit 302 corresponds to area P1: the terminal device controls unit 302 to vibrate, driving area P1 of the screen 30 to vibrate and emit sound. Likewise, unit 303 corresponds to area P2, unit 304 to area P3, and unit 305 to area P4.
For example, in the embodiments of the present invention, the at least two sound generating units in the terminal device may include the sound generating units 302, 303, 304, and 305 described above.
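The one-to-one correspondence between units and screen areas in Fig. 3 can be sketched as a simple lookup. The identifiers below are assumptions for illustration, not names from the patent.

```python
# Illustrative mapping (identifiers are assumptions, not from the patent):
# each sound generating unit of Fig. 3 drives exactly one area of screen 30
# when it vibrates, and no two units share an area.
UNIT_TO_AREA = {
    "unit_302": "P1",
    "unit_303": "P2",
    "unit_304": "P3",
    "unit_305": "P4",
}

def vibrating_area(unit):
    """Return the single screen area driven by the given unit."""
    return UNIT_TO_AREA[unit]
```

Because the mapping is one-to-one, driving only the first sound generating unit vibrates only its own area, which is what confines the sound.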
Specifically, in the method for outputting voice according to the embodiments of the present invention, step S202 may be implemented as step S202':
S202': the terminal device controls the first sound generating unit to vibrate, driving the screen to vibrate and emit sound, thereby outputting the voice.
For example, in conjunction with Fig. 3(a), the first sound generating unit in the embodiments of the present invention may be the sound generating unit 302 shown in Fig. 3(a). That is, the terminal device may control unit 302 to vibrate, driving the screen 30 (specifically, area P1) to vibrate and emit sound, thereby outputting the voice.
It can be understood that, when the voice output mode of the terminal device is the earpiece mode and the user's ear is close to the screen, the user can hear the voice output by the vibration of the screen area onto which the ear maps. That is, the first sound generating unit closest to the user's ear vibrates to drive its corresponding screen area to vibrate and output the voice, which satisfies the user's need to listen to the voice.
Optionally, in the embodiments of the present invention, each of the at least two sound generating units is made of a piezoelectric material.
A piezoelectric material is a material capable of converting between mechanical vibration and an alternating electrical signal.
Specifically, the sound generating unit in the embodiments of the present invention can convert a voice signal (i.e., an alternating current) into mechanical vibration, and its vibration drives the screen to vibrate and emit sound, thereby outputting the voice. The voice output in this way changes with the voice signal received by the sound generating unit; for example, the voice changes with the frequency of the voice signal.
Optionally, a sound generating unit may be an exciter or a driver, so that the terminal device can control it to drive the screen to vibrate and emit sound to output the voice.
Optionally, when the voice output mode of the terminal device is the speaker mode, the terminal device may control the at least two sound generating units to vibrate, driving the screen to vibrate and emit sound and outputting stereo surround voice. The phase of the voice signal driving each of the at least two sound generating units may be set according to actual requirements, and the embodiments of the present invention are not specifically limited in this respect.
It can be understood that, in a scenario where the voice output mode of the terminal device is the earpiece mode, the user typically brings the screen of the terminal device close to the ear.
It should be noted that, in the method for outputting voice provided by the embodiments of the present invention, the terminal device controls the first sound generating unit closest to the user's ear to vibrate, driving the screen to vibrate and emit sound to output the voice. The sound therefore comes mainly from the screen area corresponding to the first sound generating unit, while the other screen areas may not vibrate or emit sound, so other users cannot easily hear the voice output by the terminal device. The privacy of the voice output process can thus be further improved.
In a possible implementation manner, the terminal device may determine the distances between the at least two sound generating units and the user's ear by ultrasonic ranging in order to determine the first sound generating unit. Specifically, S201 may include S201a, S201b, and S201c. Fig. 4 is a flowchart of another method for outputting voice according to an embodiment of the present invention; as shown in Fig. 4, S201 shown in Fig. 2 may be replaced with S201a, S201b, and S201c:
S201a: the terminal device obtains, for each of the at least two sound generating units, a corresponding distance value, thereby obtaining at least two distance values.
The distance value corresponding to each sound generating unit represents the distance between that sound generating unit and the user's ear.
In addition, it is understood that, just as the terminal device can control a sound generating unit to drive the screen to vibrate and output voice, it can also control the sound generating unit to drive the screen to vibrate and output ultrasonic waves.
The sound generating unit provided by the embodiments of the present invention can convert an ultrasonic signal into mechanical vibration, driving the screen to vibrate and emit ultrasonic waves. When an ultrasonic wave output through a sound generating unit meets an obstacle, the wave reflected by the obstacle reaches the screen, causing the screen to vibrate and drive the sound generating unit to vibrate; the sound generating unit then converts this mechanical vibration back into an alternating current. In this way, the terminal device receives the ultrasonic echo through the sound generating unit. The signal used for ranging may have a fixed frequency, for example a frequency greater than 20,000 hertz (Hz).
Specifically, the terminal device can start timing when a sound generating unit (e.g., the first sound generating unit) vibrates to drive the screen to output an ultrasonic wave, and stop timing when it receives, through that sound generating unit, the wave reflected back by an obstacle (e.g., the user's ear). With the propagation speed of ultrasound in air being 340 m/s (meters per second) and the measured time denoted T, the distance S between the screen area of that sound generating unit and the user's ear can be calculated as S = 340 × T / 2.
In summary, the terminal device can control each of the at least two sound generating units to vibrate, driving the screen to vibrate and output ultrasonic waves, thereby obtaining the distance value corresponding to each sound generating unit.
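The ranging step above reduces to the round-trip formula S = 340 × T / 2. A minimal sketch, assuming the round-trip time is measured in seconds:

```python
SPEED_OF_SOUND = 340.0  # m/s, the in-air propagation speed used above

def distance_from_echo(round_trip_s):
    """S = 340 * T / 2: the ultrasonic wave travels to the ear and back,
    so half the round-trip path is the unit-to-ear distance."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# A 1 ms round trip corresponds to roughly 0.17 m (17 cm).
d = distance_from_echo(0.001)
```

Running this per sound generating unit yields the set of distance values that S201b compares.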
S201b, the terminal device determines a minimum first distance value from the at least two distance values.
For example, for the sound generating units in the terminal device shown in fig. 3, the distance value corresponding to the sound generating unit 302 may be recorded as distance value 1, the distance value corresponding to the sound generating unit 303 as distance value 2, the distance value corresponding to the sound generating unit 304 as distance value 3, and the distance value corresponding to the sound generating unit 305 as distance value 4.
For example, the at least two distance values may include distance value 1 through distance value 4, and the first distance value may be distance value 1.
S201c, the terminal device determines that the sound production unit corresponding to the first distance value is the first sound production unit.
Illustratively, the sound generating unit 302 corresponding to distance value 1 is the first sound generating unit.
It should be noted that, in the method for outputting voice provided by the embodiment of the present invention, the terminal device can obtain a corresponding distance value for each of the at least two sound generating units, and can thus determine from each distance value the distance from that sound generating unit to the ear of the user. The terminal device can therefore determine the first sound generating unit, that is, the one of the at least two sound generating units closest to the ear of the user, and control the first sound generating unit to vibrate to drive the screen to vibrate and generate sound so as to output the voice.
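The selection in S201b and S201c amounts to taking the unit with the minimum measured distance. The sketch below is illustrative only; the unit IDs follow fig. 3 and the distance values are made up.

```python
# Pick the sound generating unit with the smallest measured distance value
# (steps S201b/S201c in the description above).
def nearest_unit(distances: dict) -> tuple:
    """Return (unit_id, distance) for the unit closest to the user's ear."""
    unit_id = min(distances, key=distances.get)
    return unit_id, distances[unit_id]

# Hypothetical measurements (metres) for the units of fig. 3.
measured = {302: 0.02, 303: 0.05, 304: 0.04, 305: 0.07}
print(nearest_unit(measured))  # (302, 0.02) — unit 302 is the first sound generating unit
```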
In a possible implementation manner, when the terminal device outputs voice through the first sound generating unit, areas of the terminal device other than the area where the first sound generating unit is located may also output the voice. Specifically, in order to further improve the privacy of the voice output by the terminal device, the method for outputting voice provided by the embodiment of the present invention may further include S203 after the foregoing S201; for example, S203 may follow S202:
S203, the terminal device outputs a suppressed voice through each second sound generating unit of the at least one second sound generating unit, so as to output at least one suppressed voice, wherein the phase of each suppressed voice is opposite to the phase of the voice.
The at least one second sound generating unit comprises the sound generating units other than the first sound generating unit among the at least two sound generating units.
It is understood that, since the phase of the voice output by the first sound generating unit (i.e., the phase of the sound wave of the voice) is opposite to the phase of the suppressed voice output by a second sound generating unit (i.e., the phase of the sound wave of the suppressed voice), the suppressed voice output by the second sound generating unit can suppress the voice output by the first sound generating unit.
Typically, the intensity of the suppressed voice output by a second sound generating unit is less than the intensity of the voice output by the first sound generating unit.
It should be noted that, in the method for outputting voice according to the embodiment of the present invention, after the terminal device outputs the voice through the first sound generating unit, the terminal device may further cause each of the at least one second sound generating unit to output a suppressed voice. Since the phase of each suppressed voice is opposite to the phase of the voice output by the first sound generating unit, while the terminal device outputs the voice through the area where the first sound generating unit is located, the areas other than that area do not output the voice. Therefore, the privacy of the voice output process of the terminal device is further improved.
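The anti-phase suppression described above can be illustrated with a pure-tone sketch. This is an assumed simplification (a single 440 Hz tone, a gain of 0.8 for the lower-intensity suppressed voice); the patent does not specify waveforms.

```python
import math

# The voice output by the first sound generating unit, modelled as a pure tone.
def voice_sample(t: float, freq: float = 440.0) -> float:
    return math.sin(2 * math.pi * freq * t)

# The suppressed voice of S203: opposite phase (factor -1), typically at a
# lower intensity than the voice (gain < 1, value assumed for illustration).
def suppression_sample(t: float, freq: float = 440.0, gain: float = 0.8) -> float:
    return -gain * voice_sample(t, freq)

t = 0.0003
# The superposition heard away from the ear: reduced to (1 - gain) of the voice.
print(voice_sample(t) + suppression_sample(t))
```

With equal intensity (gain = 1.0) the two waves would cancel completely; the lower gain models suppression rather than elimination.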
In a possible implementation manner, according to the method for outputting voice provided by the embodiment of the present invention, while the terminal device controls the first sound generating unit to vibrate to drive the screen of the terminal device to vibrate and generate sound so as to output the voice, the screen vibrates mainly in the area corresponding to the first sound generating unit, but the other areas of the screen may also vibrate slightly; that is, those other areas may also vibrate, generate sound, and output the voice. Therefore, in order to ensure the privacy of the process of outputting the voice, the terminal device needs to suppress the vibration and sound generation of those other areas.
Specifically, in the embodiment of the present invention, S202 may be replaced with S202a, and S203 may be replaced with S203 a. Exemplarily, as shown in fig. 5, a flow chart of another method for outputting speech according to an embodiment of the present invention is shown. With reference to fig. 2, S202 shown in fig. 2 may be replaced with S202a shown in fig. 5, and S203a is further included after S202 a:
S202a, the terminal device controls the first sound generating unit to convert the received first voice signal into mechanical vibration, so as to drive a first area corresponding to the first sound generating unit on the screen to vibrate and generate sound, and output the voice from the first area.
For example, the first sound emitting unit may be the sound emitting unit 302 shown in fig. 3, and the first area may be the area P1 shown in fig. 3.
It is emphasized that the first voice signal (denoted as voice signal 1) is the electrical signal from which the terminal device generates the voice (denoted as voice 1) to be output. Specifically, the terminal device may send the first voice signal to the first sound generating unit through its CPU, so that the first sound generating unit receives the first voice signal.
It can be understood that when the terminal device controls the first sound-emitting unit to vibrate to drive the first area of the screen to vibrate for sound emission, the vibration of the first area may drive other areas of the screen except the first area to vibrate slightly.
Optionally, the volume of the voice output by the terminal device mainly depends on the amplitude of the vibration of the first area in the screen, and the amplitude of the vibration of the first area depends on the strength of the first voice signal.
Optionally, after receiving an operation by which the user triggers the terminal device to adjust the volume of the output voice, the terminal device may, in response to the operation, adjust the intensity of the first voice signal, and thereby adjust the volume of the voice output by the terminal device.
S203a, the terminal device controls each second sound generating unit of the at least one second sound generating unit to convert the received second voice signal into mechanical vibration, so as to drive a second area corresponding to that second sound generating unit on the screen to vibrate and generate sound and output a suppressed voice, thereby outputting at least one suppressed voice.
Wherein the phase of the first voice signal is opposite to the phase of the second voice signal.
It should be noted that, in the embodiment of the present invention, when the screen of the terminal device vibrates and drives a sound generating unit to vibrate, the sound generating unit can convert its mechanical vibration into a voice signal. The terminal device may then report the detected voice signal to the CPU in the terminal device. Subsequently, the CPU may adjust the phase and/or intensity of the voice signal and transmit the adjusted voice signal to a sound generating unit.
Specifically, in the embodiment of the present invention, the terminal device may detect sound in its environment; for example, the terminal device may control any one of the at least one second sound generating unit to detect the voice over the corresponding second area of the screen. When a second area of the screen vibrates along with the voice (such as voice 1) output by the first area and drives the corresponding second sound generating unit to vibrate, that second sound generating unit can convert the mechanical vibration into a voice signal (recorded as voice signal 2). The terminal device may then control the second sound generating unit to report voice signal 2 to the CPU.
It can be understood that voice signal 1 and voice signal 2 are both voice signals corresponding to voice 1; the strength of voice signal 1 is greater than that of voice signal 2, and at any given time the phases of voice signal 1 and voice signal 2 are the same.
Optionally, after receiving voice signal 2 reported by any second sound generating unit, the CPU in the terminal device may generate, in real time, a voice signal 3 (i.e., the second voice signal) having the same strength as and an opposite phase to voice signal 2. Subsequently, after the second sound generating unit receives voice signal 3 sent by the CPU, it can convert voice signal 3 into mechanical vibration to drive the second area to vibrate and generate sound. For example, the second sound generating unit may be the sound generating unit 303 shown in fig. 3, and the second area may be the area P2 shown in fig. 3.
It can be understood that, because voice signal 1 and voice signal 3 are opposite in phase at any given time (that is, voice signal 2 and voice signal 3 are opposite in phase at any given time), the vibration that the first area of the screen induces in the second area may, at any given moment, be opposite in direction to the vibration of the second area driven by the second sound generating unit, while the two vibrations may have the same amplitude. When the two vibrations of the second area are opposite in direction at the same moment, the vibration of the second area can be suppressed; when, in addition, their amplitudes are the same, the vibration of the second area can be eliminated.
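The closed loop of S203a, where the CPU inverts the picked-up leakage (voice signal 2) to produce the drive signal (voice signal 3), can be sketched as follows. The sample rate, leakage amplitude, and tone frequency are assumptions for illustration.

```python
import math

# Voice signal 3: same amplitude as the picked-up leakage, opposite phase,
# as generated by the CPU from the reported voice signal 2.
def cancellation_drive(picked_up):
    return [-s for s in picked_up]

fs = 8000  # assumed sample rate
# Voice signal 2: leakage of voice 1 picked up at the second area (300 Hz tone).
leak = [0.1 * math.sin(2 * math.pi * 300 * n / fs) for n in range(80)]
drive = cancellation_drive(leak)            # voice signal 3
residual = [a + b for a, b in zip(leak, drive)]  # net motion of the second area
print(max(abs(r) for r in residual))        # 0.0 — ideal elimination
```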
It should be noted that, according to the method for outputting voice provided by the embodiment of the present invention, while the terminal device outputs voice by controlling the first area of the screen to vibrate and generate sound, it can suppress or even eliminate the vibration and sound generation of the second areas of the screen. Therefore, the privacy of the voice output process of the terminal device is further improved.
In one possible implementation, when a user listens to voice using the terminal device in the earpiece mode, the area mapped by the user's ear on the screen of the terminal device may change. Therefore, in order to ensure the privacy of the voice output process of the terminal device, the terminal device can detect in real time the area mapped by the user's ear on the screen, that is, the area corresponding to the sound generating unit closest to the user's ear, and output the voice through that area.
Specifically, the method for outputting voice provided in the embodiment of the present invention may further include S204-S206 after the above S202:
S204, the terminal device obtains a corresponding distance value for each second sound generating unit of the at least one second sound generating unit, so as to obtain at least one distance value.
The distance value is used for representing the distance between a sound generating unit and the ear of the user, and the at least one second sound generating unit comprises the sound generating units other than the first sound generating unit among the at least two sound generating units.
S205, the terminal equipment determines a minimum second distance value from at least one distance value.
S206, in a case that the second distance value is smaller than the first distance value, the terminal device outputs the voice through the sound generating unit corresponding to the second distance value.
For example, the sound emitting unit corresponding to the second distance value may be the sound emitting unit 304 shown in (a) of fig. 3.
Similarly, for the descriptions of S204 to S206 in the embodiments of the present invention, reference may be made to the descriptions of S201a, S201b, and S201c in the above embodiments, which are not repeated herein.
It should be noted that, in the embodiment of the present invention, the terminal device may determine, in real time, a sound generating unit closest to an ear of the user in the terminal device, and may control the sound generating unit to vibrate to drive the screen to vibrate and generate sound to output voice. In this way, when a user listens to voice using the terminal device, even if the area of the user's ear mapped on the screen of the terminal device changes, the terminal device can control the sound production unit closest to the user's ear to produce sound by vibration. Therefore, the privacy of the voice output process of the terminal equipment can be improved, and meanwhile, the user can be ensured to hear the voice.
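The real-time re-selection in S204-S206 can be sketched as one round of a tracking loop: re-measure the other units and switch only if one is now closer than the active unit. The function name and the measurement values are assumed.

```python
# One re-measurement round of S204-S206: switch to a second sound generating
# unit only if its measured distance beats the active unit's distance.
def maybe_switch(active_unit: int, active_dist: float, others: dict):
    """Return the (unit, distance) to use after re-measuring the other units."""
    candidate = min(others, key=others.get)         # S205: minimum second distance
    if others[candidate] < active_dist:             # S206: closer than the first distance
        return candidate, others[candidate]
    return active_unit, active_dist

# The ear has moved: unit 304 is now nearer than the active unit 302.
print(maybe_switch(302, 0.03, {303: 0.05, 304: 0.02, 305: 0.07}))  # (304, 0.02)
```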
In a possible implementation manner, in the method for outputting a speech according to the embodiment of the present invention, the step S201 may be replaced by the step S207:
S207, in a case that the terminal device is to output voice and the voice output mode of the terminal device is the earpiece mode, the terminal device determines the first sound generating unit from the at least two sound generating units.
The condition that the terminal device is to output voice comprises at least one of the following condition 1 and condition 2:
condition 1: the terminal device and other terminal devices communicate by voice.
Optionally, the terminal device may determine that condition 1 is satisfied in any of the following cases:
The terminal device answers a voice call request. For example, the terminal device receives an input, triggered when a system call or a third-party network call is received, for answering the call.
Or the terminal device initiates a voice call request. For example, the terminal device receives an input, triggered when a system call or a third-party network call is initiated, for dialing out the call.
Or the terminal device switches to the earpiece mode during a voice call. For example, when the terminal device is in a system call or a third-party network call and the voice output mode is the speaker mode, the terminal device receives an input triggered by the user to switch the voice output mode of the terminal device to the earpiece mode.
Specifically, the voice to be output by the terminal device is a voice in a voice communication process with other terminal devices.
Condition 2: and the user triggers the terminal equipment to play voice.
It will be appreciated that one or more voices may be stored in a voice player (e.g., a voice memo application) or a communication application in the terminal device.
Optionally, the terminal device may determine that condition 2 is satisfied as follows: the terminal device receives an input by which the user triggers the terminal device to output a voice stored in the terminal device.
It can be understood that when the terminal device meets the condition of the voice to be output, the user requires the terminal device to output the voice in a private mode.
It should be noted that, in the method for outputting voice according to the embodiment of the present invention, the terminal device can determine whether the condition of the voice to be output is satisfied. When the condition is satisfied, the terminal device controls the sound generating unit closest to the ear of the user to vibrate, so as to drive the screen to generate sound and output the voice; when the condition is not satisfied, the terminal device is not triggered to determine the sound generating unit closest to the ear of the user. Therefore, the privacy of the voice output of the terminal device can be improved, and the resources of the terminal device can be saved.
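The gating logic of S207, combining condition 1 or condition 2 with the earpiece mode, can be sketched as follows. The parameter names are assumptions for illustration.

```python
# S207 gating: the nearest sound generating unit is determined only when
# voice is pending (condition 1 or condition 2) AND the device is in
# earpiece mode; otherwise no distance measurement is triggered.
def should_select_unit(in_voice_call: bool,
                       user_requested_playback: bool,
                       earpiece_mode: bool) -> bool:
    voice_pending = in_voice_call or user_requested_playback  # condition 1 or 2
    return voice_pending and earpiece_mode

print(should_select_unit(True, False, True))   # True  — in a call, earpiece mode
print(should_select_unit(False, False, True))  # False — no voice to output
```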
In a possible implementation manner, as shown in fig. 6, a schematic diagram of a possible structure of a terminal device is provided for the embodiment of the present invention. At least two sound generating units are provided below the screen of the terminal device 60 shown in fig. 6. The terminal device 60 includes: a determining module 601 and a first output module 602; the determining module 601 is configured to determine a first sound generating unit from the at least two sound generating units, where the first sound generating unit is the sound generating unit closest to the ear of the user among the at least two sound generating units; the first output module 602 is configured to output the voice through the first sound generating unit determined by the determining module 601.
Optionally, the determining module 601 is specifically configured to obtain, for each of the at least two sound generating units, a corresponding distance value respectively to obtain at least two distance values, where the distance value corresponding to each sound generating unit is used to indicate a distance between the sound generating unit and an ear of a user; determining a minimum first distance value from the at least two distance values; and determining the sound production unit corresponding to the first distance value as a first sound production unit.
Optionally, the terminal device 60 further includes: a second output module; a second output module, configured to, after the determining module 601 determines the first sound generating unit from the at least two sound generating units, output a suppressed speech through each of the at least one second sound generating units, respectively, so as to output at least one suppressed speech, where a phase of each suppressed speech is opposite to a phase of the speech; the at least one second sound generating unit is the other sound generating units except the first sound generating unit in the at least two sound generating units.
Optionally, the at least two sound generating units are uniformly distributed below the screen of the terminal device. The first output module 602 is specifically configured to control the first sound generating unit to convert the received first voice signal into mechanical vibration, so as to drive the first area corresponding to the first sound generating unit on the screen to vibrate and generate sound, and output the voice from the first area. The second output module is specifically configured to control each second sound generating unit of the at least one second sound generating unit to convert the received second voice signal into mechanical vibration, so as to drive the second area corresponding to that second sound generating unit on the screen to vibrate and generate sound and output a suppressed voice, thereby outputting at least one suppressed voice; wherein the phase of the first voice signal is opposite to the phase of the second voice signal.
Optionally, the determining module 601 is further configured to, after the first output module 602 controls the first sound generating unit to output the voice, respectively obtain corresponding distance values for each second sound generating unit in at least one second sound generating unit to obtain at least one distance value, where the at least one second sound generating unit is another sound generating unit except the first sound generating unit in the at least two sound generating units; determining a minimum second distance value from the at least one distance value;
the first output module 602 is further configured to, when the second distance value determined by the determining module 601 is smaller than the first distance value, output the voice through a sound generating unit corresponding to the second distance value.
Optionally, the determining module 601 is specifically configured to determine the first sound generating unit from the at least two sound generating units in a case that the terminal device is to output voice and the voice output mode of the terminal device is the earpiece mode; the condition that the terminal device 60 is to output voice includes at least one of the following: the terminal device 60 is in voice communication with another terminal device, and the user triggers the terminal device 60 to play a voice.
Optionally, each of the at least two sound generating units is made of a piezoelectric material.
The terminal device 60 provided in the embodiment of the present invention can implement each process implemented by the terminal device in the foregoing method embodiments, and for avoiding repetition, details are not described here again.
It should be noted that, in the terminal device provided by the embodiment of the present invention, at least two sound generating units are provided. Specifically, the terminal device may determine a first sound generating unit from the at least two sound generating units, where the first sound generating unit is the sound generating unit closest to the ear of the user among the at least two sound generating units, and output the voice through the first sound generating unit. Based on this scheme, because the terminal device controls the first sound generating unit closest to the ear of the user to output the voice, the terminal device outputs the voice mainly through sound generation in the area where the first sound generating unit is located; the other areas of the terminal device generate little or no sound, so other users are not likely to hear the voice output by the terminal device. Therefore, the privacy of the voice output process of the terminal device can be improved.
Fig. 7 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention, where the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, power supply 111, and piezoelectric module 112. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 7 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to determine a first sound generating unit from the at least two sound generating units, where the first sound generating unit is the sound generating unit closest to the ear of the user among the at least two sound generating units; and the piezoelectric module 112 is configured to output the voice through the first sound generating unit determined by the processor 110.
It should be noted that, in the terminal device provided by the embodiment of the present invention, at least two sound generating units are provided. Specifically, the terminal device may determine a first sound generating unit from the at least two sound generating units, where the first sound generating unit is the sound generating unit closest to the ear of the user among the at least two sound generating units, and output the voice through the first sound generating unit. Based on this scheme, because the terminal device controls the first sound generating unit closest to the ear of the user to output the voice, the terminal device outputs the voice mainly through sound generation in the area where the first sound generating unit is located; the other areas of the terminal device generate little or no sound, so other users are not likely to hear the voice output by the terminal device. Therefore, the privacy of the voice output process of the terminal device can be improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call; specifically, after receiving downlink data from a base station, the radio frequency unit 101 delivers it to the processor 110 for processing, and it also transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system. For example, in the embodiment of the present invention, the terminal device may implement voice communication with other terminal devices through the radio frequency unit 101, for example, by initiating a communication request to the other terminal devices through the radio frequency unit 101.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. Specifically, the voice to be output by the terminal device may be input into the terminal device by the user through the input unit 104.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Specifically, the screen in the terminal device in the embodiment of the present invention may be implemented by the display unit 106.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a power on/off key), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 7, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, and is not limited herein. That is, the screen in the embodiment of the present invention may be implemented by integrating the touch panel 1071 and the display panel 1061.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal device (such as audio data, a phonebook, etc.). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device: it connects the various parts of the entire terminal device via various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 110. It is understood that the determining module (e.g., the determining module 601) and the control module (e.g., the control module 602) in the embodiment of the present invention may be integrated in the processor 110. In addition, the CPU of the terminal device mentioned in the above embodiments may also be implemented by the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
Optionally, in this embodiment of the present invention, the piezoelectric module 112 included in the terminal device may include at least two sound generating units made of piezoelectric material; for example, the at least two sound generating units may be disposed below the display unit 106. The piezoelectric module 112 may be coupled to the processor 110. Specifically, in the embodiment of the present invention, the terminal device may vibrate the piezoelectric module 112 to drive the display unit 106 to vibrate and generate sound, so as to output voice. Of course, although not shown in fig. 7, the piezoelectric module 112 may also be connected to the display unit 106.
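The screen-sounding scheme described above — the sound generating unit nearest the user's ear plays the voice while the remaining units play a phase-inverted "suppressed voice" (claim 3) — can be illustrated with a minimal sketch. This is not code from the patent; all function names are hypothetical, and the distance values, which would in practice come from proximity sensing, are made up for illustration.

```python
# Illustrative sketch (not from the patent): pick the sound generating unit
# nearest the ear and derive the anti-phase signals for the remaining units.

def select_first_unit(distances):
    """Return the index of the unit with the smallest distance to the ear."""
    return min(range(len(distances)), key=lambda i: distances[i])

def drive_signals(voice_samples, distances):
    """Map each unit index to the signal it should play: the voice for the
    nearest unit, its phase inverse (suppressed voice) for all others."""
    first = select_first_unit(distances)
    inverted = [-s for s in voice_samples]  # opposite phase, per claim 3
    return {i: (voice_samples if i == first else inverted)
            for i in range(len(distances))}

# Hypothetical distances: unit 1 is nearest, so it plays the voice;
# units 0 and 2 play the phase-inverted signal.
signals = drive_signals([0.0, 0.5, 1.0, 0.5], distances=[3.2, 1.1, 4.8])
```

In this reading, the anti-phase signals from the non-nearest units partially cancel the sound leaking from their screen regions, which is consistent with the earpiece-privacy motivation implied by claim 6.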
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements the processes of the foregoing method embodiments and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the foregoing method embodiments and can achieve the same technical effects; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware alone, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. A method for outputting voice, applied to a terminal device, wherein the terminal device is provided with at least two sound generating units uniformly distributed below a screen, each sound generating unit corresponds to one area of the screen, and each sound generating unit is used for driving its corresponding area to vibrate and generate sound; the method comprising:
determining a first sound generating unit from the at least two sound generating units, wherein the first sound generating unit is the sound generating unit closest to an ear of a user among the at least two sound generating units;
outputting voice through the first sound generating unit;
and outputting a suppressed voice through each second sound generating unit of at least one second sound generating unit, wherein the at least one second sound generating unit comprises the sound generating units other than the first sound generating unit among the at least two sound generating units.
2. The method according to claim 1, wherein the determining a first sound generating unit from the at least two sound generating units comprises:
obtaining a corresponding distance value for each of the at least two sound generating units to obtain at least two distance values, wherein the distance value corresponding to each sound generating unit is used for representing the distance between the sound generating unit and the ear of the user;
determining a minimum first distance value from the at least two distance values;
and determining the sound generating unit corresponding to the first distance value as the first sound generating unit.
3. The method according to claim 1 or 2, wherein
the phase of each suppressed voice is opposite to the phase of the voice.
4. The method according to claim 3, wherein
the outputting voice through the first sound generating unit comprises:
controlling the first sound generating unit to convert a received first voice signal into mechanical vibration, so as to drive a first area of the screen corresponding to the first sound generating unit to vibrate and generate sound, thereby outputting the voice from the first area;
and the outputting a suppressed voice through each of the at least one second sound generating unit, to output at least one suppressed voice, comprises:
controlling each of the at least one second sound generating unit to convert a received second voice signal into mechanical vibration, so as to drive a second area of the screen corresponding to that second sound generating unit to vibrate and generate sound, thereby outputting a suppressed voice and thus the at least one suppressed voice;
wherein the phase of the first voice signal is opposite to the phase of the second voice signal.
5. The method according to claim 2, further comprising, after outputting the voice through the first sound generating unit:
obtaining a corresponding distance value for each second sound generating unit of at least one second sound generating unit to obtain at least one distance value, wherein the at least one second sound generating unit comprises the sound generating units other than the first sound generating unit among the at least two sound generating units;
determining a minimum second distance value from the at least one distance value;
and in a case where the second distance value is smaller than the first distance value, outputting the voice through the sound generating unit corresponding to the second distance value.
6. The method according to claim 1, wherein the determining a first sound generating unit from the at least two sound generating units comprises:
determining the first sound generating unit from the at least two sound generating units in a case where the terminal device is to output voice and the voice output mode of the terminal device is an earpiece mode;
wherein the case where the terminal device is to output voice comprises at least one of the following: the terminal device being in voice communication with another terminal device, or a user triggering the terminal device to play voice.
7. A terminal device, wherein the terminal device is provided with at least two sound generating units uniformly distributed below a screen, each sound generating unit corresponds to one area of the screen, and each sound generating unit is used for driving its corresponding area to vibrate and generate sound;
the terminal device comprises: a determining module, a first output module, and a second output module;
the determining module is configured to determine a first sound generating unit from the at least two sound generating units, wherein the first sound generating unit is the sound generating unit closest to an ear of a user among the at least two sound generating units;
the first output module is configured to output voice through the first sound generating unit determined by the determining module;
and the second output module is configured to, after the determining module determines the first sound generating unit from the at least two sound generating units, output a suppressed voice through each second sound generating unit of at least one second sound generating unit, wherein the at least one second sound generating unit comprises the sound generating units other than the first sound generating unit among the at least two sound generating units.
8. The terminal device according to claim 7, wherein the determining module is specifically configured to: obtain a corresponding distance value for each of the at least two sound generating units to obtain at least two distance values, wherein the distance value corresponding to each sound generating unit is used for representing the distance between the sound generating unit and the ear of the user; determine a minimum first distance value from the at least two distance values; and determine the sound generating unit corresponding to the first distance value as the first sound generating unit.
9. The terminal device according to claim 7 or 8, wherein the phase of each suppressed voice is opposite to the phase of the voice.
10. A terminal device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method of outputting speech according to any one of claims 1 to 6.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of outputting speech according to any one of claims 1 to 6.
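The selection and hand-off logic recited in claims 2 and 5 — pick the unit with the minimum ear distance, then re-measure and switch only if another unit has become closer than the currently active one — can be sketched as follows. This is an illustrative reading of the claims, not the patented implementation; the function names and distance values are hypothetical.

```python
# Hypothetical sketch of the selection (claim 2) and hand-off (claim 5) logic.

def nearest(distances):
    """Return (index, distance) of the closest sound generating unit."""
    idx = min(range(len(distances)), key=lambda i: distances[i])
    return idx, distances[idx]

def maybe_switch(active_idx, active_dist, new_distances):
    """Re-evaluate the remaining units; switch output only if one of them
    is now closer than the currently active unit (second < first)."""
    others = {i: d for i, d in enumerate(new_distances) if i != active_idx}
    second_idx = min(others, key=others.get)
    if others[second_idx] < active_dist:
        return second_idx, others[second_idx]
    return active_idx, active_dist

# Unit 1 starts as the voice output; after the ear moves, unit 0 is closer
# (1.0 < 2.5), so the voice output is handed off to unit 0.
first_idx, first_dist = nearest([4.0, 2.5, 6.0])
active = maybe_switch(first_idx, first_dist, [1.0, 2.5, 6.0])
```

In a real device the distances would presumably be refreshed continuously during a call, so `maybe_switch` would run on each new measurement rather than once.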
CN201810877952.XA 2018-08-03 2018-08-03 Method for outputting voice and terminal equipment Active CN109089192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810877952.XA CN109089192B (en) 2018-08-03 2018-08-03 Method for outputting voice and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810877952.XA CN109089192B (en) 2018-08-03 2018-08-03 Method for outputting voice and terminal equipment

Publications (2)

Publication Number Publication Date
CN109089192A CN109089192A (en) 2018-12-25
CN109089192B true CN109089192B (en) 2021-01-15

Family

ID=64833564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810877952.XA Active CN109089192B (en) 2018-08-03 2018-08-03 Method for outputting voice and terminal equipment

Country Status (1)

Country Link
CN (1) CN109089192B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188370A (en) * 2019-07-02 2021-01-05 中兴通讯股份有限公司 Mobile terminal and sound production control method thereof
CN110620837B (en) * 2019-10-25 2021-12-14 维沃移动通信有限公司 Receiver control method and electronic equipment
CN112911466B (en) * 2019-11-19 2023-04-28 中兴通讯股份有限公司 Method and device for selecting sound receiving unit, terminal and electronic equipment
CN111314513A (en) * 2020-02-25 2020-06-19 Oppo广东移动通信有限公司 Ear protection control method of electronic equipment and electronic equipment with same
CN114466097A (en) * 2021-08-10 2022-05-10 荣耀终端有限公司 Mobile terminal capable of preventing sound leakage and sound output method of mobile terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203377920U (en) * 2013-06-04 2014-01-01 瑞声科技(南京)有限公司 Mobile terminal
CN106817660A (en) * 2017-03-31 2017-06-09 奇酷互联网络科技(深圳)有限公司 A kind of sound-producing device, sounding control method and portable electric appts
CN107592592A (en) * 2017-07-28 2018-01-16 捷开通讯(深圳)有限公司 Display panel, mobile terminal and screen sounding control method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA1338084C (en) * 1988-06-09 1996-02-27 Akira Okaya Multidimensional stereophonic sound reproduction system
JP2006197047A (en) * 2005-01-12 2006-07-27 Nec Corp Mobile communication terminal, warning screen display method used therefor, and program thereof
CN201234277Y (en) * 2008-06-26 2009-05-06 宇龙计算机通信科技(深圳)有限公司 Mobile phone display screen
DE112011103546T5 (en) * 2010-10-20 2013-11-07 Yota Devices Ipr Ltd. Wireless network division means
CN103778909B (en) * 2014-01-10 2017-03-01 瑞声科技(南京)有限公司 Screen sonification system and its control method
CN104469632B (en) * 2014-11-28 2018-02-02 华勤通讯技术有限公司 A kind of vocal technique and sound-producing device
CN106856582B (en) * 2017-01-23 2019-08-27 瑞声科技(南京)有限公司 The method and system of adjust automatically sound quality
CN106954142A (en) * 2017-05-12 2017-07-14 微鲸科技有限公司 Orient vocal technique, device and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203377920U (en) * 2013-06-04 2014-01-01 瑞声科技(南京)有限公司 Mobile terminal
CN106817660A (en) * 2017-03-31 2017-06-09 奇酷互联网络科技(深圳)有限公司 A kind of sound-producing device, sounding control method and portable electric appts
CN107592592A (en) * 2017-07-28 2018-01-16 捷开通讯(深圳)有限公司 Display panel, mobile terminal and screen sounding control method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Sony OLED TV A1 Released; Wang Shanyang; Computer & Network; 2017-06-05; p. 25 *

Also Published As

Publication number Publication date
CN109089192A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN109089192B (en) Method for outputting voice and terminal equipment
CN110752980B (en) Message sending method and electronic equipment
CN108874357B (en) Prompting method and mobile terminal
CN108008858B (en) Terminal control method and mobile terminal
CN110058836B (en) Audio signal output method and terminal equipment
CN111083684A (en) Method for controlling electronic equipment and electronic equipment
CN109639863B (en) Voice processing method and device
CN109407832B (en) Terminal device control method and terminal device
CN107765251B (en) Distance detection method and terminal equipment
CN108681413B (en) Control method of display module and mobile terminal
CN109257505B (en) Screen control method and mobile terminal
CN108810198A (en) Sounding control method, device, electronic device and computer-readable medium
CN107749306B (en) Vibration optimization method and mobile terminal
WO2019154322A1 (en) Incoming call processing method and mobile terminal
WO2021238844A1 (en) Audio output method and electronic device
CN109951584B (en) Cleaning method and mobile terminal
CN110677770B (en) Sound production control method, electronic device, and medium
CN110505335A (en) Sounding control method, device, electronic device and computer-readable medium
CN108769364A (en) Call control method, device, mobile terminal and computer-readable medium
CN109451154B (en) Method for setting multimedia file and terminal equipment
CN109068276B (en) Message conversion method and terminal
CN111309392A (en) Equipment control method and electronic equipment
CN110851106A (en) Audio output method and electronic equipment
CN108430025A (en) A kind of detection method and mobile terminal
CN111031173B (en) Incoming call processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant