CN107306313B - Audio apparatus and control method thereof - Google Patents

Audio apparatus and control method thereof

Info

Publication number
CN107306313B
Authority
CN
China
Prior art keywords
unit
vibration
audio device
audio
helmet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710256391.7A
Other languages
Chinese (zh)
Other versions
CN107306313A
Inventor
金在沅
朴宰兴
金善基
朴云姬
李东在
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020160125992A (external priority patent KR101824925B1)
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of CN107306313A
Application granted
Publication of CN107306313B
Legal status: Active (current)
Anticipated expiration

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B06 GENERATING OR TRANSMITTING MECHANICAL VIBRATIONS IN GENERAL
    • B06B METHODS OR APPARATUS FOR GENERATING OR TRANSMITTING MECHANICAL VIBRATIONS OF INFRASONIC, SONIC, OR ULTRASONIC FREQUENCY, e.g. FOR PERFORMING MECHANICAL WORK IN GENERAL
    • B06B1/00 Methods or apparatus for generating mechanical vibrations of infrasonic, sonic, or ultrasonic frequency
    • B06B1/02 Methods or apparatus for generating mechanical vibrations of infrasonic, sonic, or ultrasonic frequency making use of electrical energy
    • B06B1/04 Methods or apparatus for generating mechanical vibrations of infrasonic, sonic, or ultrasonic frequency making use of electrical energy operating with electromagnetism
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • A HUMAN NECESSITIES
    • A42 HEADWEAR
    • A42B HATS; HEAD COVERINGS
    • A42B3/00 Helmets; Helmet covers; Other protective head coverings
    • A42B3/04 Parts, details or accessories of helmets
    • A42B3/30 Mounting radio sets or communication systems
    • A42B3/306 Audio entertainment systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B17/00 Measuring arrangements characterised by the use of infrasonic, sonic or ultrasonic vibrations
    • G01B17/02 Measuring arrangements characterised by the use of infrasonic, sonic or ultrasonic vibrations for measuring thickness
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H1/00 Measuring characteristics of vibrations in solids by using direct conduction to the detector
    • G01H1/04 Measuring characteristics of vibrations in solids by using direct conduction to the detector of vibrations which are transverse to direction of propagation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B1/00 Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38 Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827 Portable transceivers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Electromagnetism (AREA)
  • Mechanical Engineering (AREA)
  • Environmental & Geological Engineering (AREA)
  • Multimedia (AREA)
  • Helmets And Other Head Coverings (AREA)
  • Details Of Audible-Bandwidth Transducers (AREA)

Abstract

An audio apparatus and a control method thereof are provided. The audio device includes a vibration unit that provides audio using vibration, a sensing unit that senses at least one of a material and a thickness of an object to which the audio device is attached, a pressure adjustment unit that adjusts the pressure applied to the object by the vibration unit, and a processor that controls the pressure adjustment unit, based on the at least one of the thickness and the material of the object sensed by the sensing unit, such that the pressure applied to the object by the vibration unit is adjusted.

Description

Audio apparatus and control method thereof
Technical Field
The present disclosure relates to an audio device and a control method thereof, and more particularly, to an audio device that is attached to an article worn by a user and provides audio through vibration, and a control method thereof.
Background
Helmets are now worn in a wide range of situations. For example, workers in environments such as construction sites, logging sites, factories, and aircraft, equipment, or vehicle operation must wear helmets. Moreover, as leisure activities such as skiing, inline skating, skateboarding, cycling, motorcycling, and amusement parks have diversified, ordinary people also frequently need to wear helmets.
As described above, although helmets are needed in various environments, a helmet covers the ears, and when the surrounding noise is severe, output audio generally cannot be heard accurately. Furthermore, the hands are often not free while a helmet is in use.
To address these problems, audio devices that attach to a helmet and provide audio through vibration have been introduced. However, because the sound quality or volume of audio provided through vibration varies with factors such as the kind or thickness of the helmet, it is difficult to provide the user with optimal sound quality.
Disclosure of Invention
Technical Problem
An object of the present disclosure is to provide an audio device and a control method thereof that provide optimal audio according to characteristics of an object by sensing characteristics of the object to which the audio device is attached.
Technical Solution
An audio device according to an embodiment of the present disclosure includes a vibration unit that provides audio using vibration, a sensing unit that senses at least one of a material and a thickness of an object to which the audio device is attached, a pressure adjustment unit that adjusts the pressure applied to the object by the vibration unit, and a processor that controls the pressure adjustment unit, based on the at least one of the thickness and the material of the object sensed by the sensing unit, such that the pressure applied to the object by the vibration unit is adjusted.
Further, according to another embodiment of the present disclosure, a control method of an audio device including a vibration unit that provides audio using vibration includes: sensing at least one of a material and a thickness of an object to which the audio device is attached; and adjusting the pressure applied to the object by the vibration unit based on the sensed at least one of the thickness and the material of the object.
Advantageous effects
According to the embodiments of the present invention described above, even when a user is wearing a helmet, optimal audio can be obtained according to the characteristics of the helmet worn by the user. Further, the user can have a variety of user experiences through the audio device attached to the helmet.
Drawings
Fig. 1a is a diagram illustrating an audio device attached to a helmet according to an embodiment of the present disclosure;
Fig. 1b is a block diagram schematically illustrating the structure of an audio device according to an embodiment of the present disclosure;
Fig. 2 is a block diagram illustrating in detail the structure of an audio apparatus according to an embodiment of the present disclosure;
Figs. 3a and 3b are diagrams illustrating the structure of a vibration unit according to an embodiment of the present disclosure;
Figs. 4a to 4f are diagrams for explaining a method of providing an optimal audio signal using an audio device including a vibration unit according to an embodiment of the present disclosure;
Figs. 5a and 5b are diagrams for explaining a pressure adjusting unit according to an embodiment of the present disclosure;
Figs. 6 to 9 are flowcharts for explaining control methods of an audio apparatus according to various embodiments of the present disclosure;
Figs. 10a to 10e are diagrams illustrating an audio device including a plurality of buttons and an attachment unit of the audio device according to an embodiment of the present disclosure;
Fig. 11 is a diagram illustrating a system including an audio device and a portable terminal according to an embodiment of the present disclosure; and
Fig. 12 is a diagram illustrating a system including a plurality of audio devices according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, terms used in the present specification will be briefly described, and the present invention will be specifically described.
The terms used in the embodiments of the present invention have been selected, as far as possible, from general terms currently in wide use in consideration of their functions in the present invention, but they may vary according to the intention of those skilled in the art, precedents, the emergence of new technologies, and the like. In addition, in certain cases, some terms have been arbitrarily selected by the applicant, and in such cases their meanings are described in detail in the corresponding description of the invention. Accordingly, the terms used in the present invention should be defined based on their meanings and the overall content of the present invention, rather than simply on their names.
While embodiments of the invention are susceptible to various modifications and alternative embodiments, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. However, the scope of the present invention is not limited to the specific embodiments, and the present invention should be understood to include all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention. In the description of the embodiments, when it is judged that a detailed description of a related well-known technology may obscure a gist, a detailed description thereof will be omitted.
The terms first, second, etc. may be used to describe various structural elements, but the structural elements should not be limited by these terms. These terms are used only for the purpose of distinguishing one constituent element from other constituent elements.
The singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. In the present application, terms such as "including" or "configured to" are intended to designate features, numerals, steps, operations, constituent elements, components, or combinations thereof described in the specification, and thus should be understood as not to preclude the presence or addition of one or more other features or numerals, steps, operations, constituent elements, components, or combinations thereof in advance.
In the embodiments of the present invention, a "module" or a "unit" performs at least one function or operation, and may be implemented as hardware or software, or may be implemented as a combination of hardware and software. Also, in addition to "modules" or "units" that need to be implemented with specific hardware, a plurality of "modules" or a plurality of "units" may be integrated into at least one module to be implemented as at least one processor (not shown).
In the embodiments of the present invention, when a certain portion is referred to as being "connected" to another portion, the "connected" includes not only the case of being "directly connected" but also the case of being "electrically connected" with another element interposed therebetween. In addition, when a part is referred to as "including" a certain constituent element, it means that other constituent elements may be included, without excluding other constituent elements, unless there is explicit contrary description.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings so that those skilled in the art to which the present invention pertains can easily carry out the present invention. However, the present invention may be embodied in many different forms and is not limited to the embodiments described herein. In addition, in order to clearly describe the present invention in the drawings, portions that are not related to the description are omitted, and like reference numerals are given to like portions throughout the specification.
Hereinafter, the present disclosure will be described with reference to the accompanying drawings. Fig. 1a is a diagram illustrating an audio device attached to a helmet according to an embodiment of the present disclosure. First, as shown in fig. 1a, the audio device 100 may be attached to a specific region of the helmet 50 and provide audio to a user through vibration. At this time, although the audio device 100 may be attached to an area corresponding to an ear in the helmet 50, this is only one embodiment, which may be attached to any area of the helmet.
In particular, the audio device 100 may provide audio to the user by vibrating the helmet 50 to which it is attached, so that the vibrated helmet 50 functions as a speaker. Specifically, the vibration unit vibrates according to the electromagnetic field effect of a coil and a magnet, the vibration is transferred through the medium (the helmet 50) to the air, and the user hears the vibrating air as audio.
Fig. 1b is a block diagram schematically illustrating the structure of an audio apparatus according to an embodiment of the present disclosure. First, the audio device 100 includes a vibration unit 110, a sensing unit 120, a pressure adjusting unit 130, and a processor 140.
The vibration unit 110 may provide audio using vibration. Specifically, the vibration element is vibrated by an electromagnetic field effect of a coil and a magnet included in the vibration unit 110, and the vibration of the vibration element may be provided to a user as audio through a medium (helmet).
The sensing unit 120 senses at least one of a material and a thickness of an object (i.e., a helmet) 50 to which the audio device 100 is attached. At this time, the sensing unit 120 may be implemented as an ultrasonic sensor. Specifically, the sensing unit 120 measures the thickness of the object 50 from the time it takes the ultrasonic waves generated by the ultrasonic sensor to be reflected after reaching the object 50, and can determine the kind of helmet the object 50 is by measuring the volume of ultrasonic waves absorbed and the volume of ultrasonic waves scattered when the waves penetrate the object 50.
The pressure adjusting unit 130 adjusts the pressure applied to the object 50 by the vibration unit 110. Specifically, the vibration unit 110 provides audio of different tone qualities according to the pressure applied to the object 50. At this time, the pressure adjusting unit 130 may provide the optimal sound quality by adjusting the pressure applied to the object 50 by the vibration unit 110.
In particular, the pressure adjusting unit 130 includes a suspension unit that supports the vibration unit 110, and a motor unit that adjusts the distance between the suspension unit and the object 50 according to pressure data corresponding to the material and thickness of the object 50 sensed by the sensing unit 120. At this time, the pressure applied to the object 50 by the audio device 100 can be adjusted by the motor unit adjusting the distance between the suspension unit and the object 50.
The processor 140 controls the overall operation of the audio device 100. In particular, the processor 140 may control the pressure adjusting unit 130 based on at least one of the thickness and the material of the object 50 sensed by the sensing unit 120, so that the pressure applied to the object 50 by the vibration unit 110 is adjusted.
Specifically, the processor 140 may retrieve pressure data corresponding to information on the thickness and material of the object 50 sensed by the sensing unit 120 in the storage unit, and may control the motor unit based on the retrieved pressure data. That is, the processor 140 may adjust the distance of the suspension unit from the object 50 based on pre-stored information (e.g., pressure data) so that the optimum sound quality can be provided.
In particular, the processor 140 may control the pressure adjusting unit 130 such that the pressure becomes higher as the object 50 is thicker or its material is harder, and becomes lower as the object 50 is thinner or its material is softer.
In addition, although in the above-described embodiment, it is described that the pressure data corresponding to the information about the object is stored in the memory, this is only one embodiment, and the pressure data corresponding to the information about the object may be stored on the external server. At this time, the audio device 100 may transmit the sensed information about the type and thickness of the object to the external server, and may receive pressure data corresponding to the type and thickness of the object from the external server.
With the audio apparatus 100 as described above, it is possible to obtain optimum audio according to the characteristics of the helmet worn by the user, regardless of the helmet worn by the user.
Fig. 2 is a block diagram illustrating in detail the structure of an audio apparatus according to an embodiment of the present disclosure. As shown in fig. 2, the audio device 100 includes a vibration unit 110, a sensing unit 120, a pressure adjusting unit 130, a microphone 150, a storage unit 160, a communication unit 170, a plurality of buttons 180, an attachment unit 190, and a processor 140. In addition, the structure of the audio device 100 shown in fig. 2 is merely an example and is not necessarily limited to the block diagram described. Therefore, it is apparent that parts of the structure of the audio device 100 shown in fig. 2 may be omitted, modified, or supplemented according to the kind or purpose of the audio device 100.
The vibration unit 110 provides audio by generating vibration. At this time, the vibration unit 110 may provide not only vibration corresponding to audio received from the outside but also vibration corresponding to notification audio related to various information.
In particular, the vibration unit 110 vibrates the medium (helmet 50) through the vibration element, thereby providing audio to the user. Fig. 3a and 3b are diagrams illustrating a structure of a vibration unit according to an embodiment of the present disclosure. First, as shown in fig. 3a, the vibration unit 110 includes a vibration element 111, a coil 113, a first vibration isolation plate 115, a vibration transmission plate 117, and a second vibration isolation plate 119.
The vibration element 111 generates vibration by an electromagnetic field effect with the coil 113. At this time, the vibration element 111 generates vibration and provides audio through the medium. As shown in fig. 3b, the vibration element 111 may be accommodated by the suspension unit 131. Further, the vibration element 111 may generate vibration having a vibration intensity and a vibration mode corresponding to a current flowing through the coil 113.
The coil 113 causes the vibration element 111 to generate vibration through an electromagnetic field effect with the magnet.
The first vibration isolation plate 115 can not only block vibration and noise coming from the outside but also prevent the vibration generated in the vibration element 111 from spreading to the outside. That is, the first vibration isolation plate 115 can prevent the vibration generated by the vibration element 111 from being transmitted to the second face of the audio device opposite to the first face facing the helmet.
At this time, although the first vibration isolation plate 115 may be implemented in stainless steel, it is not limited thereto.
The vibration transfer plate 117 is disposed on the first face of the audio device 100 facing the helmet, in the area that contacts the external medium, and transmits the vibration generated in the vibration element 111 to the external medium (e.g., the helmet). At this time, as shown in fig. 3b, the vibration transfer plate 117 may contact the vibration element 111 and transfer the vibration generated in the vibration element 111 to the object 50.
The second vibration isolation plate 119 is located at the side of the vibration transmission plate 117; when vibration is transmitted from the vibration transmission plate 117 to the object 50, it prevents the vibration of the vibration transmission plate 117 from leaking to the outside and directs that vibration toward the object 50.
That is, with the structure shown in fig. 3a, noise that may be generated while the vibration is being transmitted is removed by the first and second vibration isolation plates 115 and 119, so that only the user hears the audio.
Referring back to fig. 2, the sensing unit 120 obtains sensing values through various sensors. In particular, the sensing unit 120 may obtain a sensing value for sensing at least one of the material and the thickness of the object 50 to which the audio device 100 is attached. Specifically, the sensing unit 120 may be implemented as an ultrasonic sensor, and an ultrasonic flaw detection method may be employed to obtain the sensing value for sensing at least one of the material and the thickness of the object 50 to which the audio device 100 is attached. Here, the ultrasonic flaw detection method is a method of transmitting ultrasonic waves into a test body and detecting discontinuities existing in the test body; the shape of the object is measured by comparing the energy reflected from a discontinuous portion in the test body, the time taken for the transmitted ultrasonic wave to penetrate the test body and be reflected from the discontinuous portion, and the amount of loss when the ultrasonic wave penetrates the test body against appropriate standard data. That is, the processor 140 can obtain a cross-sectional profile of the object 50 based on the sensing values obtained by the ultrasonic sensor.
Specifically, as shown in fig. 4a, when the ultrasonic wave 410 generated by the ultrasonic sensor contacts the object 50 and is reflected 420, the sensing unit 120 may measure the time (Δt) from when the wave is generated by the ultrasonic sensor until it is reflected back from the medium, thereby measuring the thickness of the object 50. The thickness of the object 50 may be determined by the following Mathematical Formula 1.
[Mathematical Formula 1]
D = (α × Δt) / 2
Here, D is the thickness of the object 50, Δt is the time for the ultrasonic wave to be reflected back, and α is the propagation velocity coefficient that depends on the material of the object 50. For example, the propagation velocity coefficient of an object 50 made of industrial plastic is 1.4 to 2.4 mm/µs, and that of an object 50 made of a sports plastic mixed with polystyrene is 2.388 mm/µs. The propagation velocity coefficient for each kind of object (i.e., material) may be measured in advance and stored in the storage unit 160.
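As a concrete illustration, the following sketch computes the thickness from the measured echo time using Mathematical Formula 1, assuming the standard pulse-echo relation with the round-trip time divided by two. The material names and the representative coefficient for industrial plastic are placeholders derived from the ranges cited above, not values defined by this disclosure.

```python
# Illustrative sketch of Mathematical Formula 1: D = (alpha * delta_t) / 2.
# The coefficient table is hypothetical; the cited ranges above are only used as a guide.

PROPAGATION_COEFF_MM_PER_US = {
    "industrial_plastic": 2.0,            # representative value within the 1.4 - 2.4 mm/us range
    "polystyrene_sport_plastic": 2.388,   # value cited in the description
}

def object_thickness_mm(delta_t_us: float, material: str) -> float:
    """Thickness D of the object from the round-trip echo time delta_t (in microseconds)."""
    alpha = PROPAGATION_COEFF_MM_PER_US[material]
    # divide by two because delta_t covers the trip to the far face and back
    return alpha * delta_t_us / 2.0
```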
Further, when the ultrasonic waves generated by the ultrasonic sensor penetrate the object 50, the processor 140 may measure the volume of the absorbed ultrasonic waves and the volume of the scattered ultrasonic waves, confirm the thickness of the object 50, and then compare these measurements with pre-stored sample data, thereby identifying the kind of helmet.
Specifically, as shown in fig. 4e, the transmission rate (i.e., the ratio of the volume of the absorbed ultrasonic waves to the volume of the scattered ultrasonic waves) according to the type of each object 50 may be stored in the storage unit 160. At this time, the storage unit 160 may pre-store the value measured by the laboratory for the transmittance.
In addition, the sensing unit 120 may measure the volume of the ultrasonic waves absorbed by the object 50 and the volume of the scattered ultrasonic waves, and the processor 140 may determine the type of the object 50 by comparing the ratio of the measured absorbed volume to the scattered volume with the pre-stored transmittance.
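A minimal sketch of one plausible reading of this comparison step follows. The stored transmittance values and helmet names are hypothetical placeholders; the disclosure only states that laboratory-measured transmittances are pre-stored and compared against the measured ratio.

```python
# Hypothetical sketch: classify the object by comparing the measured absorbed/scattered
# ratio with pre-stored laboratory transmittance values (names and values are made up).

STORED_TRANSMITTANCE = {
    "ski_helmet": 0.42,
    "motorcycle_helmet": 0.31,
    "construction_helmet": 0.55,
}

def classify_object(absorbed_volume: float, scattered_volume: float) -> str:
    measured_ratio = absorbed_volume / scattered_volume
    # choose the stored type whose transmittance is closest to the measured ratio
    return min(STORED_TRANSMITTANCE,
               key=lambda kind: abs(STORED_TRANSMITTANCE[kind] - measured_ratio))
```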
In addition, according to an embodiment of the present invention, the ultrasonic sensor may be implemented using one of a vertical probe, a rectangular probe, and an airborne ultrasonic sensor. Further, according to an embodiment of the present invention, as shown in fig. 4b, the ultrasonic sensor 430 may be located at a side of the vibration unit 110. That is, because the ultrasonic sensor 430 is located as close as possible to the vibration unit 110, the processor 140 can determine the type and thickness of the helmet 50 in the area of the object corresponding to the vibration unit 110. Further, the sensing unit 120 may include various sensors in addition to the ultrasonic sensor. Specifically, the sensing unit 120 may include a noise sensor for sensing external noise, a motion sensor for sensing motion of the audio device 100, a sensor for sensing user motion, and the like. In addition, the sensing unit 120 may include various sensors for sensing the surrounding environment of the audio device 100.
In addition, according to an embodiment of the present invention, as shown in fig. 4c, the shielding unit 440 may be disposed around the vibration unit 110. That is, as shown in fig. 4d, the shielding unit 440 may prevent audio from leaking to the outside by pressing the vibration unit 110 against the medium (i.e., the helmet) 50. At this time, the shielding unit 440 may be implemented in a rubber or silicone material.
The pressure adjusting unit 130 is a structure for adjusting the pressure applied to the object 50 by the vibration unit 110. Audio of different sound quality is provided depending on the pressure applied to the object 50 by the vibration unit 110 or the area of the vibration unit 110 in contact with the object 50.
Therefore, in order to provide the optimal sound quality, the pressure adjusting unit 130 may mechanically adjust the pressure applied to the object 50 by the vibration unit 110 according to the control of the processor 140.
Specifically, as shown in fig. 5a, the pressure adjusting unit 130 includes a suspension unit 131, a motor unit 133, and a worm wheel 135. The suspension unit 131 may support the vibration unit 110 and absorb impacts applied to the vibration unit 110. Further, the distance of the suspension unit 131 from the object 50 may be adjusted by the motor unit 133 and the worm wheel 135. The motor unit 133 operates the worm wheel 135 under the control of the processor 140, thereby adjusting the distance between the object 50 and the suspension unit 131. That is, when the worm wheel 135 is operated in a first direction (e.g., downward), the distance between the object 50 and the suspension unit 131 may become longer, and when the worm wheel 135 is operated in a second direction (e.g., upward), the distance between the object 50 and the suspension unit 131 may become shorter. That is, the pressure adjusting unit 130 may adjust the pressure applied to the object 50 by the vibration unit 110 by adjusting the distance between the object 50 and the suspension unit 131.
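The sketch below illustrates one way the motor unit 133 could translate a pressure change into worm-wheel travel. The calibration constant and the motor interface are assumptions made for illustration; the disclosure itself only states that the distance between the suspension unit 131 and the object 50 is adjusted under the processor's control.

```python
# Hypothetical sketch of the distance adjustment (calibration constant and motor API assumed).

WORM_TRAVEL_MM_PER_GRAM = 0.01   # assumed calibration: worm-wheel travel per gram of pressure change

def drive_to_pressure(current_pressure_g: float, target_pressure_g: float, move_worm_mm) -> None:
    """Move the worm wheel so the vibration unit applies the target pressure to the object."""
    delta_g = target_pressure_g - current_pressure_g
    travel_mm = delta_g * WORM_TRAVEL_MM_PER_GRAM
    # positive travel shortens the suspension-to-object distance (higher pressure),
    # negative travel lengthens it (lower pressure)
    move_worm_mm(travel_mm)
```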
In addition, although it is described in the above embodiment that the pressure adjusting unit 130 automatically adjusts the distance between the suspension unit 131 and the object 50 through the processor 140, this is only one embodiment; the distance between the suspension unit 131 and the object 50 may also be adjusted manually so that the user obtains the optimal sound while a standard sound wave is emitted into the helmet. At this time, as shown in fig. 5b, the pressure adjusting unit 130 may include a knob 137 for adjusting the distance between the object 50 and the suspension unit 131. However, this is only one embodiment, and the distance between the object 50 and the suspension unit 131 may be manually adjusted by various manual means such as a button.
Further, the pressure adjusting unit 130 may include both the motor unit 133 and the knob 137. At this time, when the automatic mode is set by the user, the pressure adjusting unit 130 may adjust the pressure applied to the object 50 by the vibration unit 110 using the motor unit 133. In addition, when the manual mode is set by the user (or when the kind or thickness of the pre-stored object cannot be known through the sensing value obtained by the sensing unit 120), the pressure adjusting unit 130 may adjust the pressure applied to the object 50 by the vibration unit 110 using the knob 137.
In addition, although the pressure adjusting unit 130 may adjust the pressure applied to the object 50 by the vibration unit 110, this is only one embodiment, and it may also adjust the area of the vibration unit 110 in contact with the object 50. That is, the pressure adjusting unit 130 may adjust the area of the vibration unit 110 in contact with the object 50 by adjusting the distance between the object 50 and the suspension unit 131.
The microphone 150 receives the user's voice. In particular, the user voice received through the microphone 150 may be transmitted to an external terminal through the communication unit 170.
In addition, although the microphone 150 may be disposed inside the audio device 100, this is only one embodiment, and it may also be implemented as a microphone having a form of wired or wireless connection with the audio device 100.
In addition, in order to block noise such as wind noise, the microphone 150 may have a windscreen made of a thin mesh material such as sponge or stocking fabric.
The storage unit 160 stores various modules for driving the audio device 100. For example, the storage unit 160 stores therein software including a base module, a sensing module, a communication module, a service module, and the like. At this time, the base module is a base module that processes signals transferred from each hardware included in the audio device 100 and transfers them to an upper module. The sensing module is a module that collects information from various sensors, and analyzes and manages the collected information. The service module is a module that performs various services based on the collected sensing information.
As described above, although the storage unit 160 may include various program modules, it is apparent that various program modules may be omitted in part or modified or added according to the kind and characteristics of the audio device 100. For example, in the case where the audio device 100 includes a display, the storage unit 160 may include various modules such as an interface (UI) module, a presentation module.
Further, the storage unit 160 stores standard data for determining the kind of the object 50. That is, in order to determine the kind of the object 50, the storage unit 160 may map and store information on the ultrasonic wave transmittance and information on the type of the object.
In addition, the storage unit 160 stores pressure data that provides optimal audio according to the thickness and kind of the object 50. For example, as shown in table 1 below, the storage unit 160 may store a table mapping the type and thickness of the object with the pressure data.
[Table 1]
(Table image not reproduced; Table 1 maps the type and thickness of the object to the corresponding pressure data, e.g., pressure data of 230 g for a 25 mm ski helmet (PC + EPP + cloth).)
According to other embodiments of the present invention, the storage unit 160 may match and store the thickness and kind of the object 50 with other data (e.g., data on the distance between the object 50 and the suspension unit 131, the moving distance data of the worm wheel 135, etc.), in addition to the pressure data for controlling the motor unit 133. At this time, the thickness and kind of the object 50 and the pressure data or other data information for controlling the motor unit 133 may be sample data measured in a laboratory to provide the optimal sound quality.
In addition, the storage unit 160 may be implemented as a nonvolatile memory or a volatile memory.
The communication unit 170 is a structure that communicates with various types of external devices by various types of communication means. The communication unit 170 may include various communication chips such as a WiFi chip, a bluetooth chip, an NFC chip, and a wireless communication chip. At this time, the WiFi chip, the bluetooth chip, and the NFC chip perform communication in a WiFi mode, a bluetooth mode, and an NFC mode, respectively. The NFC chip refers to a chip operating in an NFC (near Field communication) mode using a 13.56MHz frequency band in various RF-ID frequency bands such as 135kHz, 13.56MHz, 433MHz, 860-960 MHz, 2.45GHz, and the like. When using a WiFi chip or a bluetooth chip, various connection information such as SSID and session key may be transceived first, and various information may be transceived after a communication connection is made using the same. The wireless communication chip refers to a chip that performs communication according to various communication specifications such as IEEE, zigbee, 3G (3rd Generation; third Generation mobile communication), 3GPP (3rd Generation Partnership Project), LTE (Long Term evolution), and the like.
In particular, the audio device 100 can directly make a phone call with an external terminal through the wireless communication module. Further, the audio device 100 can connect via bluetooth communication with a nearby portable terminal of the helmet wearer and make a phone call with an external terminal through it. That is, the audio device 100 may receive audio data from the external terminal through the portable terminal, and may transmit sound input by the user of the audio device 100 to the portable terminal.
Further, the communication unit 170 may receive notification information (e.g., short message notification, etc.) from the portable terminal of the wearer, and the processor 140 may control the vibration unit 110 to output an audio notification based on the received notification information.
The plurality of buttons 180 may be physically implemented on the audio device 100. At this time, the plurality of buttons 180 may correspond to a plurality of functions and a plurality of contact addresses, respectively. For example, as shown in fig. 10a, the audio device 100 may include a first button 1010, a second button 1020, and a third button 1030.
At this time, the first button 1010 may correspond to a first function (e.g., a communication connection function) and may correspond to a first contact address. In addition, the second button 1020 may correspond to a second function (e.g., a remaining battery level confirmation function) and may correspond to a second contact address. Further, the third button 1030 corresponds to a third function (e.g., a current time and weather information notification function), and may correspond to a third contact address.
At this time, when the user presses one of the plurality of buttons 1010, 1020, 1030, the processor 140 may compare how long the button was pressed with a preset time, and either perform the function corresponding to the pressed button or request a call to the contact address corresponding to the pressed button. For example, the processor 140 may perform the communication connection function when it senses that the user pressed the first button 1010 for a time shorter than the preset time, and may request a phone call to the external portable terminal corresponding to the first contact address when it senses that the first button 1010 was pressed for a time reaching or exceeding the preset time. As another example, the processor 140 may request a phone call to the external portable terminal corresponding to the first contact address when it senses that the first button 1010 was pressed for a time shorter than the preset time, and may perform the communication connection function when it senses that the first button 1010 was pressed for a time reaching or exceeding the preset time.
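A minimal sketch of this press-duration branching is given below; the one-second threshold and the callback names are illustrative assumptions, not values defined by the disclosure.

```python
# Hypothetical sketch of the short-press / long-press branching described above.

PRESET_PRESS_TIME_S = 1.0   # assumed threshold separating short and long presses

def handle_button_press(press_duration_s: float, perform_function, request_call) -> None:
    if press_duration_s < PRESET_PRESS_TIME_S:
        perform_function()   # e.g., the communication connection function of the button
    else:
        request_call()       # e.g., request a phone call to the button's contact address
```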
Further, the audio device 100 may provide notifications to the user. A user notification may be an active notification or a passive notification. In the case of an active notification, when the user presses a specific button of the plurality of buttons 180, the notification is immediately activated and the user is informed of the desired information (e.g., weather, time, remaining battery level, etc.) through TTS (Text to Speech). In the case of a passive notification, for example, when the user selects, through a portable terminal application dedicated to the audio device 100 or the like, a service for which notifications of messages, e-mail, schedules, or the like are to be received, the service information can be analyzed and a TTS notification can be provided to the audio device 100 when the selected service is activated. For example, when the user selects a communication application as an application dedicated to the audio device 100, the sender and the content of the received information may be provided via TTS when the communication application receives that information. Such a notification function may be configured to operate according to the services for which the plurality of buttons 180 are respectively activated, and one button of the plurality of buttons 180 may be responsible for a plurality of service functions.
In addition, a button for adjusting the volume, a power button, and the like may be further included on the audio device 100.
In addition, indicators indicating corresponding functions may be formed on the plurality of buttons 1010, 1020, 1030. For example, a button corresponding to the call function may be formed with an indicator of a telephone set style, and a button corresponding to the text message confirmation function may be formed with an indicator of a text message style.
The attachment unit 190 is a structure that can attach the audio device 100 to an object 50 (e.g., a helmet). In particular, the attachment unit 190 may generate an electromagnet using a power source of the speaker and then attach the audio device 100 to the object 50 using the electromagnet. However, this is only one embodiment, and the attachment unit 190 may attach the audio device 100 to the object 50 using other methods.
According to an embodiment of the present invention, as shown in fig. 10b, the first attachment unit 1040 may be fixed on the helmet 50, and the second attachment unit 1050 may be fixed on the body of the audio device 100. Further, the first and second attachment units 1040 and 1050 may be attached (or connected) to each other such that the audio device 100 is attached to the helmet 50. At this time, the first attachment unit 1040 may be implemented as a double-sided tape or a magnet, etc.
According to still another embodiment of the present invention, as shown in fig. 10c, the attachment unit 190 may be implemented with a thread structure 1060, and as shown in fig. 10d, with a hook structure 1070, and as shown in fig. 10e, with a leaf spring structure 1080.
As yet another example, the attachment unit may be implemented as: a clip type, in which an adhesive auxiliary mount is first attached to the object 50 and a clip on the vibrator is then clipped into a groove of the auxiliary mount; a suction type, pressed against the surface of the object 50 with a rubber pad, in the manner used to attach a navigation device to an automobile; or a strap-insertion type used when goggles are worn together with the object 50, in which the vibration speaker is built into the goggle strap near the ear.
The processor 140 controls the overall operation of the audio device 100 using various programs stored in the storage unit 160.
As shown in fig. 2, the processor 140 includes a RAM 141, a ROM 142, a main CPU 144, first to nth interfaces 145-1 to 145-n, and a bus 146. At this time, the RAM 141, the ROM 142, the main CPU 144, and the first to nth interfaces 145-1 to 145-n, etc. may be connected to each other through the bus 146.
The ROM 142 stores therein an instruction set for starting the system, and the like. When the power is turned on by inputting the on command, the main CPU 144 copies the O/S stored in the storage unit 160 onto the RAM 141 according to the instruction stored in the ROM 142, and operates the O/S to start the system. After completion of the startup, the main CPU 144 copies various application programs stored in the storage unit 160 onto the RAM 141, and performs various operations by running the application programs copied to the RAM 141.
The main CPU 144 accesses the storage unit 160 and performs startup using the O/S stored in the storage unit 160. Further, the main CPU 144 executes various operations using various programs, contents, data, and the like stored in the storage unit 160.
The first to nth interfaces 145-1 to 145-n are connected to the various components described above. One of the interfaces may also become a network interface that connects with an external device through a network.
Further, although not shown in fig. 2, according to an embodiment of the audio device 100, the audio device 100 may include a display for displaying various information. At this time, the display may be a touch display capable of sensing a user touch.
A control method of the audio apparatus 100 according to various embodiments of the present disclosure will be explained below with reference to fig. 6 to 9.
Fig. 6 is a flowchart illustrating a control method for enabling the audio apparatus 100 to provide audio of optimal sound quality according to an embodiment of the present disclosure.
First, the audio device 100 senses at least one of a material and a thickness of an object 50 to which the audio device 100 is attached (S610). At this time, as described above, the audio device 100 may sense at least one of the material and the thickness of the object 50 to which the audio device 100 is attached, using the ultrasonic sensor of the sensing unit 120.
Further, the audio device 100 adjusts the pressure applied to the object by the vibration unit 110 based on the sensed at least one of the thickness and the material of the object 50 (S620).
Specifically, the processor 140 may retrieve pressure data corresponding to the information about the thickness and material of the sensed object 50 from the storage unit 160. For example, if the sensed object is a 25mm ski helmet (PC + EPP + cloth), the processor 140 may retrieve 230g as the pressure data corresponding to the 25mm ski helmet. Further, the audio device 100 may control the motor unit 133 based on the pressure data.
At this time, the processor 140 may control the pressure adjusting unit 130 such that the pressure becomes higher as the object 50 is thicker or its material is harder, and lower as the object 50 is thinner or its material is softer. That is, the processor 140 may control the motor unit 133 so that the distance between the suspension unit 131 and the object 50 becomes shorter as the object 50 is thicker or its material harder, and longer as the object 50 is thinner or its material softer. With this control method, optimal audio can be provided regardless of which helmet the audio device 100 is attached to.
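The lookup step might look like the sketch below. Only the 25 mm ski helmet entry (230 g) is taken from the example in this description; the remaining rows and the fallback to manual mode are illustrative assumptions.

```python
# Sketch of retrieving pressure data by object material and thickness (mostly hypothetical values).

PRESSURE_TABLE_G = {
    ("ski_helmet_pc_epp_cloth", 25): 230,   # example cited in the description
    ("ski_helmet_pc_epp_cloth", 20): 210,   # hypothetical
    ("motorcycle_helmet_abs", 30): 260,     # hypothetical
}

def target_pressure_g(material: str, thickness_mm: int) -> int:
    try:
        return PRESSURE_TABLE_G[(material, thickness_mm)]
    except KeyError:
        # no pre-stored entry: fall back to manual adjustment via the knob 137
        raise LookupError("no stored pressure data; switch to manual mode")
```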
In addition, according to other embodiments of the present invention, the processor 140 may vary the pressure applied to the object by the vibration unit 110 according to the frequency of the provided audio, in addition to the kind and thickness of the medium. Specifically, as shown in fig. 4f, the pressure applied to the object by the vibration unit 110 yields different SPLs (Sound Pressure Levels) depending on the frequency. Accordingly, the processor 140 may vary the pressure applied to the object 50 by the vibration unit 110 according to the frequency of the input audio. For example, when an audio signal of 1500kHz is provided, the processor 140 may control the pressure adjusting unit 130 such that the vibration unit 110 applies 100g of pressure to the object.
Fig. 7 is a flowchart for explaining a method of controlling audio volume according to external noise according to an embodiment of the present disclosure.
First, the audio device 100 may measure external noise using a noise sensor (S710).
In addition, the audio device 100 may adjust the volume of the audio according to the external noise measured by the noise sensor (S720). Specifically, the processor 140 may adjust the volume of the audio according to the magnitude of the external noise. For example, the processor 140 may measure the volume (dB) of the surrounding noise and automatically adjust the output to an appropriate level above the measured volume, thereby providing audio at an optimal volume. At this time, the processor 140 may measure and store the user's basic average call volume, and may provide audio at an optimal volume based on the stored average call volume. That is, by providing audio louder than the current noise without greatly exceeding the user's average call volume, the user can receive audio at an optimal volume without performing any additional operation.
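As an illustration of this behavior, the sketch below raises the output level above the measured noise while capping it near the stored average call volume; the margin value is an assumption, not a figure from the disclosure.

```python
# Hypothetical sketch of noise-adaptive volume control (margin value assumed).

def adapt_volume_db(noise_db: float, avg_call_volume_db: float, margin_db: float = 6.0) -> float:
    """Return an output level slightly above the ambient noise without greatly
    exceeding the user's stored average call volume."""
    desired_db = noise_db + margin_db
    return min(desired_db, avg_call_volume_db + margin_db)
```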
Fig. 8 is a flowchart for explaining a method for providing a user's voice after filtering external noise according to an embodiment of the present disclosure.
First, the audio device 100 may receive a user voice through the microphone 150 during a call with an external terminal (S810).
In addition, the audio device 100 may determine the sound wave of the external noise while inputting the user voice (S820).
In addition, the audio device 100 may generate a sound wave for canceling the sound wave of the external noise (S830). Specifically, the processor 140 may determine the external noise in the audio received via the microphone 150, generate a sound wave opposite in phase to the noise sound wave, and synthesize it with the external noise, so that only the user's voice received via the microphone 150 is transmitted to the external terminal. In this way, the user's voice can be delivered to the external terminal with optimal sound quality.
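Conceptually, the cancellation amounts to adding an anti-phase copy of the estimated noise to the microphone signal, as in the sketch below; how the noise estimate itself is obtained is outside this illustration and is assumed to be available.

```python
# Minimal sketch of anti-phase noise cancellation on sampled audio (NumPy arrays assumed).
import numpy as np

def cancel_noise(mic_signal: np.ndarray, noise_estimate: np.ndarray) -> np.ndarray:
    anti_noise = -noise_estimate          # sound wave "opposite" to the external noise
    return mic_signal + anti_noise        # noise components cancel; the user's voice remains
```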
In addition, according to another embodiment of the present invention, the audio apparatus 100 may filter external noise so that, apart from the user's voice, only audio of a specific frequency band (e.g., a whistle) is heard. Specifically, when audio reaching or exceeding a preset value is sensed in the time domain, the audio apparatus 100 may perform frequency analysis (e.g., STFT (Short Time Fourier Transform)) on the received audio. The audio apparatus 100 can then compare the frequency content of the audio at the current moment with that of a preset section and determine whether audio of the specific frequency band has occurred. The audio device 100 may then eliminate noise other than the audio of the specific frequency band and provide only that audio.
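The band check could be sketched as follows using an off-the-shelf STFT; the band limits, frame length, and energy threshold are assumptions, since the description only names STFT analysis and a comparison against a preset section.

```python
# Hypothetical sketch of detecting a specific frequency band with an STFT (SciPy assumed).
import numpy as np
from scipy.signal import stft

def band_detected(audio: np.ndarray, fs: int,
                  band_hz=(2000.0, 4000.0), threshold: float = 1e-3) -> bool:
    f, t, z = stft(audio, fs=fs, nperseg=1024)
    in_band = (f >= band_hz[0]) & (f <= band_hz[1])
    band_energy = float(np.mean(np.abs(z[in_band, :]) ** 2))
    return band_energy > threshold
```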
Fig. 9 is a flowchart for explaining a method of controlling the attachment of the audio device 100 according to the motion of the audio device 100 according to an embodiment of the present disclosure.
First, the audio device 100 may sense a motion of the audio device 100 through a motion sensor (S910). For example, the sensing unit 120 may sense a moving speed, shaking, etc. of the audio device 100 using a motion sensor.
In addition, the audio device 100 may adjust the magnetism of the electromagnet in accordance with the movement of the audio device 100 (S920). Specifically, the faster the audio device 100 moves or the greater the shaking, the stronger the magnetism of the electromagnet included in the attachment unit 190 is made, so that the attachment unit 190 keeps the audio device 100 attached to the object 50 with a constant adhesion force.
Specifically, the audio device 100 may sense speed information of the user wearing the helmet 50 and adjust, according to the speed information, the adhesion force with which the audio device 100 is fixed (or attached) to the helmet 50. For example, when the helmet is a motorcycle helmet and the user is riding a motorcycle, the audio device 100 may increase the magnetism of the electromagnet of the attachment unit 190 when a speed reaching or exceeding a threshold value is sensed, thereby strengthening the adhesion. Therefore, even if the user moves at high speed, the audio device 100 remains fixed to the helmet 50.
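A simple sketch of this speed-dependent adhesion control is shown below; the threshold and duty-cycle values are assumptions used only for illustration.

```python
# Hypothetical sketch: strengthen the attachment electromagnet above a speed threshold.

SPEED_THRESHOLD_KMH = 30.0   # assumed threshold

def electromagnet_duty(speed_kmh: float, base_duty: float = 0.4, boosted_duty: float = 0.9) -> float:
    """PWM duty cycle for the electromagnet in the attachment unit 190."""
    return boosted_duty if speed_kmh >= SPEED_THRESHOLD_KMH else base_duty
```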
Further, the audio device 100 may sense whether the user is wearing the helmet 50 and adjust the adhesion force with which the audio device 100 is attached to the helmet 50. Specifically, when it is sensed that the user is not wearing the helmet 50, the adhesion force of the audio device 100 may be reduced so that the user can easily detach the audio device 100 from the helmet 50.
Further, the audio device 100 may enable easier detachment of the audio device 100 by sensing user motion and adjusting the attachment force of the audio device 100 to the helmet 50. For example, the audio device 100 increases adhesion when a first user motion is sensed (e.g., a user motion that taps the helmet once, etc.), and the audio device 100 may decrease adhesion when a second user motion is sensed (e.g., a user motion that taps the helmet twice).
In addition, according to other embodiments of the present invention, in the case where the audio device 100 is attached to the helmet 50, the audio device 100 may receive power from the helmet 50 and automatically perform wireless charging. Specifically, where the helmet 50 has a solar charging device (e.g., a solar cell), the helmet may store power received from the solar charging device. Further, the audio device 100 may perform wireless charging of the battery using the stored power. In addition, the audio device 100 may sense an emergency state (e.g., an accident, an abnormal state) of the user and transmit information of the emergency to the outside. For example, when a preset button provided on the audio device 100 is sensed, a preset user voice is sensed, abnormal bio-information is sensed, or an impact reaching or exceeding a preset value is sensed, the audio device 100 may sense that the user is in an emergency state and transmit information of the emergency state to an external device (e.g., a pre-registered device, a government agency, a hospital, etc.).
Fig. 11 is a diagram illustrating a system including the audio device 100 and the portable terminal 1100 according to an embodiment of the present disclosure. At this time, the portable terminal 1100 may be a portable terminal held by a user wearing the object 50 to which the audio device 100 is attached.
First, the audio device 100 can make a communication connection with the portable terminal 1100. At this time, the audio device 100 may make a communication connection with the portable terminal 1100 using a short range communication module such as bluetooth, zigbee, or the like.
Further, the audio device 100 may receive a phone request from an external terminal via the portable terminal 1100. At this time, a notification about the phone request may be provided in audio form through the vibration unit 110. Further, during a phone call, the audio device 100 may receive from the portable terminal 1100 the counterpart's voice received from the external terminal. In addition, the audio device 100 may transmit a user voice input via the microphone 150 to the portable terminal 1100. That is, the audio device 100 can make a call with an external terminal through the portable terminal 1100.
In addition, the audio device 100 may receive various notification information from the portable terminal 1100. Specifically, the audio device 100 may receive various notification information such as message reception information, time notification information, schedule notification information, and the like from the portable terminal 1100.
In addition, the audio device 100 may receive information received from the portable terminal 1100 in the form of an audio signal through a TTS function. For example, when the portable terminal 1100 receives a text from the outside, the audio device 100 may receive information on a text reception event and provide the information on the text reception event in the form of a voice through the TTS module.
Further, the portable terminal 1100 may receive, from the audio device 100, battery information and usage time information of the audio device 100 as well as location information of the user wearing the audio device 100, and may provide this information.
Further, when a connectable portable terminal 1100 is sensed, the audio device 100 may provide information on the connectable portable terminal 1100. At this time, although the audio device 100 may provide the information in an audible form through the vibration unit 110, this is only one embodiment, and in the case where the audio device 100 has a display, the information may be provided in a visual form.
Further, the audio device 100 may be communicatively connected or disconnected with the portable terminal 1100 using one of a plurality of buttons.
In addition, as shown in Fig. 12, the audio device 100 may be communicatively connected to another external audio device 1200. At this time, the audio device 100 and the other audio device 1200 can communicate using a short-range wireless communication module. For example, in order to perform voice communication between firefighters when a disaster occurs, an audio device 100 may be attached to the helmet of each firefighter, and communication between the firefighters may be performed through the audio devices 100. In addition, in a case where a plurality of persons wear helmets, such as in military operations or tunnel work, voice communication can be performed using a plurality of audio devices.
In addition, the audio device 100 may adjust its output in real time according to the type (or frequency band) of the music currently being played, thereby providing an equalizer function.
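Purely as an illustration of such real-time output adjustment, the Python sketch below maps an assumed music type to per-band gains; the preset table and the apply_gains() hook are hypothetical and do not reflect the actual equalizer implementation.

EQ_PRESETS = {                 # per-band gain in dB: (low, mid, high)
    "bass_heavy": (6.0, 0.0, -2.0),
    "vocal":      (-2.0, 4.0, 2.0),
    "default":    (0.0, 0.0, 0.0),
}

def update_equalizer(music_type, apply_gains):
    """Apply band gains matched to the type of music currently being played."""
    low, mid, high = EQ_PRESETS.get(music_type, EQ_PRESETS["default"])
    apply_gains(low_db=low, mid_db=mid, high_db=high)

if __name__ == "__main__":
    update_equalizer("vocal", lambda **gains: print("applied:", gains))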
In addition, the audio device 100 may perform music play and pause functions when a preset event (e.g., user motion, momentary impact, etc.) is sensed on the helmet 50.
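A minimal sketch of this play/pause toggle follows; the event names and the player interface are assumptions used only to show the control flow.

class PlaybackToggle:
    """Toggles music playback when a preset event is sensed on the helmet."""

    def __init__(self, player):
        self.player = player
        self.playing = False

    def on_helmet_event(self, event):
        if event in ("user_motion", "momentary_impact"):   # preset events
            self.playing = not self.playing
            if self.playing:
                self.player.play()
            else:
                self.player.pause()

if __name__ == "__main__":
    class StubPlayer:
        def play(self): print("play")
        def pause(self): print("pause")

    toggle = PlaybackToggle(StubPlayer())
    toggle.on_helmet_event("user_motion")       # -> play
    toggle.on_helmet_event("momentary_impact")  # -> pause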
Further, the control method of the audio device according to the various embodiments described above may be implemented as a program and provided to a terminal device of a user. Specifically, a non-transitory computer-readable medium storing a program for performing the control method of the audio device may be provided.
A non-transitory computer-readable medium refers to a medium that stores data semi-permanently and that is readable by a device, rather than a medium that stores data for a very short time, such as a register, a buffer, or a memory. Specifically, the above-described various applications or programs may be stored and provided on a non-transitory computer-readable medium such as a CD, a DVD, a hard disk, a Blu-ray disc, a USB memory, a memory card, a ROM, or the like.
Further, although preferred embodiments of the present invention have been illustrated and described above, the present invention is not limited to the specific embodiments described above. Various modifications may be made by those of ordinary skill in the art to which the present invention pertains without departing from the gist of the present invention as claimed in the claims, and such modifications should not be understood separately from the technical idea or prospect of the present invention.

Claims (9)

1. An audio device, comprising:
a body, the body comprising:
a vibration element for generating vibration;
a vibration transmission unit that is disposed on a first face of the audio device facing a helmet and transmits vibration generated in the vibration element to the helmet; and
a first vibration shielding unit disposed around the vibration element and configured to prevent vibration generated in the vibration element from being transmitted to a second face of the audio device opposite to the first face,
a first attachment unit attached to a body of the audio device;
a second attachment unit attached to the helmet and connectable to the first attachment unit;
a sensing unit for sensing at least one of a thickness and a material of the helmet;
a pressure adjusting unit for adjusting pressure applied to the helmet; and
a processor configured to control the pressure adjusting unit to adjust the pressure applied to the helmet based on one of the thickness and the material of the helmet sensed by the sensing unit.
2. The audio device of claim 1, further comprising:
a suspension unit for supporting the vibration element.
3. The audio device of claim 1, further comprising:
a second vibration shielding unit disposed at a side of the vibration transmission unit and configured to prevent the vibration transmitted by the vibration transmission unit from being transmitted to a side of the audio device.
4. The audio device of claim 1, further comprising:
a communication unit for performing communication with an external device,
wherein the processor is further configured to control the vibration element such that a vibration corresponding to the output audio is generated.
5. The audio device of claim 4, further comprising:
a microphone for receiving input of audio, the audio including user speech,
wherein the processor is configured to eliminate external noise other than the user speech in the audio received through the microphone.
6. The audio device of claim 4, further comprising:
a button,
wherein, when a user input is received through the button, the processor controls the communication unit such that a control signal corresponding to the user input is transmitted to the external device.
7. The audio device of claim 4, further comprising:
a plurality of buttons,
wherein, when a user input is received through a first button of the plurality of buttons, the processor performs a first function corresponding to the first button, and
When user input is received through a second button of the plurality of buttons, the processor performs a second function corresponding to the second button.
8. The audio device of claim 4,
wherein, when an event is received from the external device through the communication unit, the processor controls the vibration element such that a vibration corresponding to the event is generated.
9. The audio device of claim 4, further comprising:
a sensor for sensing an external impact,
wherein when an impact above a preset value is sensed by the sensor, the processor controls the communication unit such that information corresponding to the sensed impact is transmitted to an external device.
CN201710256391.7A 2016-04-22 2017-04-19 Audio apparatus and control method thereof Active CN107306313B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2016-0049504 2016-04-22
KR20160049504 2016-04-22
KR10-2016-0125992 2016-09-30
KR1020160125992A KR101824925B1 (en) 2016-04-22 2016-09-30 Audio apparatus and Method for controlling the audio apparatus thereof

Publications (2)

Publication Number Publication Date
CN107306313A CN107306313A (en) 2017-10-31
CN107306313B true CN107306313B (en) 2020-10-27

Family

ID=60116083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710256391.7A Active CN107306313B (en) 2016-04-22 2017-04-19 Audio apparatus and control method thereof

Country Status (2)

Country Link
CN (1) CN107306313B (en)
WO (1) WO2017183816A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108683761B (en) * 2018-05-17 2020-01-14 Oppo广东移动通信有限公司 Sound production control method and device, electronic device and computer readable medium
CN112114036A (en) 2019-06-20 2020-12-22 霍尼韦尔国际公司 Method, apparatus and system for detecting internal defects in protective helmets

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8170243B2 (en) * 2008-03-14 2012-05-01 Sony Corporation Audio output apparatus and vibrator

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002951326A0 (en) * 2002-09-11 2002-09-26 Innotech Pty Ltd Communication apparatus and helmet
JP2006042021A (en) * 2004-07-28 2006-02-09 Kotobueeru:Kk Speaker for helmet
JP4631070B2 (en) * 2005-05-23 2011-02-16 並木精密宝石株式会社 Bone conduction speaker
JP2008236637A (en) * 2007-03-23 2008-10-02 Kenwood Corp Excitation type acoustic generator and helmet with excitation typeacoustic generator
US9495952B2 (en) * 2011-08-08 2016-11-15 Qualcomm Incorporated Electronic devices for controlling noise
WO2013045976A1 (en) * 2011-09-28 2013-04-04 Sony Ericsson Mobile Communications Ab Controlling power for a headset
CN102638747B (en) * 2012-03-29 2015-03-25 苏州市思玛特电力科技有限公司 Helmet-type active anti-noise system
KR101301602B1 (en) * 2013-05-15 2013-08-29 주식회사 세일 Vibrating speaker
KR101670033B1 (en) * 2013-11-21 2016-10-27 백승환 System for Smart black box service using a smart blackbox Helmet
CN104095334A (en) * 2014-07-31 2014-10-15 国家电网公司 Intelligent safety helmet
CN204483186U (en) * 2015-03-18 2015-07-22 深圳前海零距物联网科技有限公司 There is the crash helmet of audio frequency input and output

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8170243B2 (en) * 2008-03-14 2012-05-01 Sony Corporation Audio output apparatus and vibrator

Also Published As

Publication number Publication date
WO2017183816A1 (en) 2017-10-26
CN107306313A (en) 2017-10-31

Similar Documents

Publication Publication Date Title
US11051105B2 (en) Locating wireless devices
EP3547712B1 (en) Method for processing signals, terminal device, and non-transitory readable storage medium
US20200177982A1 (en) Communication Network of In-Ear Utility Devices Having Sensors
US10349176B1 (en) Method for processing signals, terminal device, and non-transitory computer-readable storage medium
JP6403397B2 (en) Application control method and device for terminal, earphone device and application control system
US9332377B2 (en) Device and method for control of data transfer in local area network
US10687142B2 (en) Method for input operation control and related products
CN105812974A (en) Ear set device
CN108600885B (en) Sound signal processing method and related product
WO2011152724A3 (en) Hearing system and method as well as ear-level device and control device applied therein
WO2016173318A1 (en) Intelligent safety headrest
KR20170033025A (en) Electronic device and method for controlling an operation thereof
CN107306313B (en) Audio apparatus and control method thereof
JP2008193497A (en) Portable communication terminal
CN111357006A (en) Fatigue prompting method and terminal
CN108810198A (en) Sounding control method, device, electronic device and computer-readable medium
KR101824925B1 (en) Audio apparatus and Method for controlling the audio apparatus thereof
CN108810787B (en) Foreign matter detection method and device based on audio equipment and terminal
KR101799392B1 (en) Audio apparatus and Method for controlling the audio apparatus thereof
US20170264993A1 (en) Head set and head set apparatus
CN107484082A (en) A kind of method and user terminal based on sound channel control audio signal transmission
CN110139181B (en) Audio processing method and device, earphone, terminal equipment and storage medium
CN106791113A (en) A kind of device for monitoring temperature and method
CN109144462A (en) Sounding control method, device, electronic device and computer-readable medium
CN108551648B (en) Quality detection method and device, readable storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant