WO2023274361A1 - Control method for a sound generating device, sound generating system, and vehicle - Google Patents
- Publication number
- WO2023274361A1 (PCT/CN2022/102818)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- sound
- sound generating
- area
- information
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/023—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/13—Acoustic transducers and sound field adaptation in vehicles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/13—Aspects of volume control, not necessarily automatic, in stereophonic sound systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/80—Technologies aiming to reduce greenhouse gasses emissions common to all road transportation technologies
- Y02T10/84—Data processing systems or methods, management, administration
Definitions
- The embodiments of the present application relate to the field of smart cars and, more specifically, to a control method for a sound generating device, a sound generating system, and a vehicle.
- Embodiments of the present application provide a control method for a sound generating device, a sound generating system, and a vehicle, which can adaptively adjust the sound field optimization center by acquiring the location information of the areas where users are located, helping to improve the users' listening experience.
- a method for controlling a sound generating device includes: a first device acquires location information of multiple areas where multiple users are located, and controls the operation of multiple sound generating devices according to the location information of the multiple areas and the position information of the multiple sound generating devices.
- the first device obtains the location information of the multiple areas where the multiple users are located and controls the operation of the multiple sound generating devices according to the location information of the multiple areas and the position information of the multiple sound generating devices, without requiring users to manually adjust the sound generating devices; this helps reduce the user's learning cost and avoid cumbersome operations, and at the same time helps multiple users enjoy a good listening effect, improving the user experience.
- the first device may be a vehicle, a sound system in a home theater, or a sound system in a KTV.
- before the first device acquires the location information of the areas where the multiple users are located, the method further includes: the first device detects a first operation of the user.
- the first operation is an operation in which the user controls the first device to play audio content, or an operation in which the user connects a second device to the first device and plays the second device's audio content through the first device.
- the acquiring, by the first device, of the location information of the multiple areas where the multiple users are located includes: the first device determines, according to sensing information, the location information of the multiple areas where the multiple users are located.
- the sensing information may be one or more of image information, sound information, and pressure information.
- the image information can be collected by an image sensor, for example, a camera device, a radar, and the like.
- Acoustic information can be collected by acoustic sensors, for example, microphone arrays.
- the pressure information may be collected by a pressure sensor, for example, a pressure sensor installed in the seat.
- the above sensing information may be data collected by sensors, or may be information obtained based on data collected by sensors.
- the acquiring, by the first device, of the location information of the multiple areas where the multiple users are located includes: the first device determines the location information of the multiple areas according to data collected by an image sensor; or, the first device determines the location information of the multiple areas according to data collected by a pressure sensor; or, the first device determines the location information of the multiple areas according to data collected by a sound sensor.
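As a minimal sketch of the pressure-sensor path described above, one way to infer which seat areas are occupied is a simple threshold on each seat's pressure reading. The seat names and the threshold value below are illustrative assumptions, not taken from the patent text.

```python
# Hypothetical sketch: infer which seat areas are occupied from
# per-seat pressure-sensor readings. The threshold is an assumed
# minimum load indicating a seated user.
PRESSURE_THRESHOLD_N = 150.0

def occupied_areas(pressure_by_seat):
    """Return the areas whose pressure reading meets the threshold."""
    return [seat for seat, p in pressure_by_seat.items()
            if p >= PRESSURE_THRESHOLD_N]

readings = {
    "driver": 620.0,
    "front_passenger": 12.0,   # empty seat, residual noise
    "rear_left": 540.0,
    "rear_right": 0.0,
}
print(occupied_areas(readings))  # ['driver', 'rear_left']
```

A real system would combine this with the image- or sound-based paths mentioned above rather than rely on a fixed threshold.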
- controlling the multiple sound generating devices through the position information of the multiple areas where the multiple users are located and the position information of the multiple sound generating devices can simplify the calculation process of the first device, so that the first device can control the multiple sound generating devices more conveniently.
- the image sensor may include a camera, a laser radar, and the like.
- the image sensor can collect image information of an area, and this image information can be used to determine whether it contains face contour information, ear information, iris information, etc., so as to determine whether a user is present in the area.
- the sound sensor may include a microphone array.
- the above-mentioned sensor may be one sensor or multiple sensors, and the sensors may be of the same type (for example, all image sensors) or of multiple types; for example, the image information collected by an image sensor and the sound information collected by a sound sensor may jointly determine the location information of the areas where the users are located.
- the location information of the multiple areas where the multiple users are located may include the center point of each of the multiple areas, a preset point in each of the multiple areas, or a point in each area obtained according to a preset rule.
- the first device controls the operation of the multiple sound generating devices according to the location information of the multiple areas and the position information of the multiple sound generating devices, including: the first device determines a sound field optimization center point whose distance to the center point of each of the multiple areas is equal, and controls the operation of each of the multiple sound generating devices according to the distance between the sound field optimization center point and that sound generating device.
- the first device may first determine the current sound field optimization center point before controlling the multiple sound generating devices, and then control their operation according to the distance between the sound field optimization center point and each sound generating device, so that multiple users can enjoy a good listening effect, which helps improve the user experience.
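The two steps above can be sketched in a few lines, assuming 2-D cabin coordinates. The seat and speaker positions are illustrative; for the symmetric seat layouts shown in the figures, the centroid of the occupied seat centers is the point equidistant from each of them.

```python
import math

def sound_field_center(area_centers):
    """Centroid of the occupied areas' center points (equidistant
    from each center for symmetric layouts)."""
    xs = [p[0] for p in area_centers]
    ys = [p[1] for p in area_centers]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def speaker_distances(center, speakers):
    """Distance from the sound field optimization center to each speaker."""
    return {name: math.dist(center, pos) for name, pos in speakers.items()}

# Four occupied seats at symmetric positions (meters, illustrative)
seats = [(-0.5, 1.0), (0.5, 1.0), (-0.5, -1.0), (0.5, -1.0)]
center = sound_field_center(seats)   # (0.0, 0.0) for this layout
speakers = {"FL": (-0.8, 1.5), "FR": (0.8, 1.5),
            "RL": (-0.8, -1.5), "RR": (0.8, -1.5)}
print(speaker_distances(center, speakers))
```

For non-symmetric layouts an exactly equidistant point may not exist; a real controller would presumably fall back to a best-fit point, which the patent text does not specify.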
- the first device controls the multiple sound generating devices to work according to the location information of the multiple areas and a mapping relationship, where the mapping relationship is between the positions of the multiple areas and the playback intensities of the multiple sound generating devices.
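One hedged reading of this mapping relationship is a lookup table from the set of occupied areas to a preset per-speaker playback-intensity profile. The seat names, speaker names, and gain values below are illustrative assumptions.

```python
# Hypothetical mapping from occupied-area sets to per-speaker
# playback intensities (0.0-1.0, illustrative values only).
INTENSITY_MAP = {
    frozenset({"driver"}):
        {"FL": 0.4, "FR": 0.6, "RL": 0.8, "RR": 1.0},
    frozenset({"driver", "rear_left"}):
        {"FL": 0.6, "FR": 0.8, "RL": 0.6, "RR": 0.8},
    frozenset({"driver", "front_passenger", "rear_left", "rear_right"}):
        {"FL": 0.7, "FR": 0.7, "RL": 0.7, "RR": 0.7},
}

def playback_intensities(occupied):
    """Look up the per-speaker intensity profile for the occupied areas."""
    return INTENSITY_MAP[frozenset(occupied)]

print(playback_intensities(["driver"]))
```

A table like this avoids any geometric computation at run time, which matches the simplification benefit the surrounding text describes.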
- the method further includes: the first device prompts position information of the sound field optimization center point.
- the first device prompting the position information of the sound field optimization center point includes: the first device prompts the position information of the sound field optimization center point through a human-machine interface (HMI) or voice.
- the first device may be a vehicle
- the first device prompting the position information of the sound field optimization center point includes: the vehicle prompting the position information of the sound field optimization center point through an ambient light.
- the multiple areas are areas in the vehicle cabin.
- the multiple areas include a front row area and a rear row area.
- the multiple areas may include a main driving area and a passenger driving area.
- the multiple areas include the main driver's seat area, the passenger's seat area, the left area of the second row, and the right area of the second row.
- the first device may be a vehicle, and the acquiring of the location information of the multiple areas where the multiple users are located includes: the vehicle acquires the location information of the multiple areas where the multiple users are located through pressure sensors under the seats in each area.
- the first device includes a microphone array, and the acquiring of the location information of the multiple users includes: the first device acquires voice signals in the environment through the microphone array, and determines, according to the voice signals, the location information of the multiple areas where the multiple users in the environment are located.
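The patent does not specify the localization algorithm, but a common building block for microphone-array localization is estimating the inter-microphone delay by cross-correlation and mapping its sign to a side of the cabin. The toy signals and two-microphone geometry below are assumptions for illustration only.

```python
def best_lag(left, right, max_lag):
    """Lag (in samples) of `right` relative to `left` that maximizes
    the cross-correlation of the two signals."""
    def corr(lag):
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Toy signal: an impulse reaches the right microphone 3 samples later,
# i.e. the talker sits closer to the left microphone.
left_mic = [0, 0, 1, 0, 0, 0, 0, 0]
right_mic = [0, 0, 0, 0, 0, 1, 0, 0]
lag = best_lag(left_mic, right_mic, max_lag=4)
side = "left" if lag > 0 else "right"
print(lag, side)  # 3 left
```

Production systems would use more microphones, calibrated geometry, and sub-sample delay estimation; this only shows the direction-from-delay idea.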
- the method further includes: the first device prompts location information of multiple areas where the multiple users are located.
- the first device prompting the location information of the multiple areas where the multiple users are located includes: the first device prompts, through a human-machine interface (HMI) or voice, the location information of the multiple areas where the multiple users are located.
- the first device may be a vehicle, and the prompting includes: the vehicle prompts, through ambient lights, the location information of the multiple areas where the multiple users are located.
- controlling the operation of the multiple sound generating devices includes: adjusting the playback intensity of each of the multiple sound generating devices.
- the playback intensity of each sound generating device among the plurality of sound generating devices is proportional to the distance between each sound generating device and the user.
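The distance-proportional rule above means a speaker farther from the listening position plays louder so that each speaker's sound arrives at comparable strength. A minimal sketch, where the choice to normalize the farthest speaker to full gain is an assumption:

```python
def gains_proportional_to_distance(distances):
    """Scale each speaker's gain by its distance to the listening
    position, normalized so the farthest speaker gets gain 1.0."""
    farthest = max(distances.values())
    return {name: d / farthest for name, d in distances.items()}

# Illustrative distances (meters) from the optimization center.
dists = {"FL": 1.0, "FR": 1.0, "RL": 2.0, "RR": 2.0}
print(gains_proportional_to_distance(dists))
# {'FL': 0.5, 'FR': 0.5, 'RL': 1.0, 'RR': 1.0}
```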
- the multiple sound generating devices include a first sound generating device, and the first device adjusting the playback intensity of each of the multiple sound generating devices includes: the first device controls the playback intensity of the first sound generating device to be a first playback intensity; the method further includes: the first device acquires an instruction from the user to adjust the playback intensity of the first sound generating device from the first playback intensity to a second playback intensity, and in response to acquiring the instruction, the first device adjusts the playback intensity of the first sound generating device to the second playback intensity.
- in this way, the user can quickly adjust the playback intensity of the first sound generating device so that it better meets the user's listening needs.
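The automatic-set-then-user-override flow described above can be sketched as a small controller. The class and method names are illustrative, not from the patent.

```python
class SpeakerController:
    """Toy controller: holds per-speaker playback intensities that are
    set automatically and may be overridden by a user instruction."""

    def __init__(self):
        self.intensity = {}

    def auto_set(self, speaker, level):
        """Automatic adjustment chosen by the first device."""
        self.intensity[speaker] = level

    def user_adjust(self, speaker, level):
        """In response to a user instruction, override the automatic level."""
        self.intensity[speaker] = level

ctrl = SpeakerController()
ctrl.auto_set("FL", 0.6)      # first playback intensity
ctrl.user_adjust("FL", 0.9)   # user moves it to the second intensity
print(ctrl.intensity["FL"])   # 0.9
```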
- a sound generating system is provided in a second aspect, including a sensor, a controller, and multiple sound generating devices, where the sensor is used to collect data and send the data to the controller; the controller is used to obtain, according to the data, the location information of multiple areas where multiple users are located, and to control the operation of the multiple sound generating devices according to the location information of the multiple areas and the position information of the multiple sound generating devices.
- the controller is specifically configured to: acquire the location information of the multiple areas according to data collected by an image sensor; or acquire the location information of the multiple areas according to data collected by a pressure sensor; or acquire the location information of the multiple areas according to data collected by a sound sensor.
- the controller is specifically configured to: determine a sound field optimization center point whose distance to the center point of each of the multiple areas is equal, and control the operation of each of the multiple sound generating devices according to the distance between the sound field optimization center point and that sound generating device.
- the controller is further configured to send a first instruction to the first prompting device, where the first instruction is used to instruct the first prompting device to prompt the sound field optimization center point location information.
- the multiple areas are areas in the vehicle cabin.
- the multiple areas include a front row area and a rear row area.
- the front row area includes a main driving area and a passenger driving area.
- the controller is further configured to send a second instruction to a second prompting device, where the second instruction is used to instruct the second prompting device to prompt the location information of the multiple areas where the multiple users are located.
- the controller is specifically configured to: adjust the playback intensity of each of the multiple sound generating devices.
- the multiple sound generating devices include a first sound generating device, and the controller is specifically configured to: control the playback intensity of the first sound generating device to be a first playback intensity; the controller is also used to obtain a third instruction by which the user adjusts the playback intensity of the first sound generating device from the first playback intensity to a second playback intensity, and in response to obtaining the third instruction, adjust the playback intensity of the first sound generating device to the second playback intensity.
- an electronic device is provided in a third aspect, including: a transceiver unit configured to receive sensing information, and a processing unit configured to acquire, according to the sensing information, the location information of multiple areas where multiple users are located; the processing unit is further configured to control the operation of multiple sound generating devices according to the location information of the multiple areas and the position information of the multiple sound generating devices.
- the processing unit being configured to control the operation of the multiple sound generating devices according to the location information of the multiple areas and the position information of the multiple sound generating devices includes: the processing unit is used to determine a sound field optimization center point whose distance to the center point of each of the multiple areas is equal, and to control the operation of each of the multiple sound generating devices according to the distance between the sound field optimization center point and that sound generating device.
- the transceiver unit is further configured to send a first instruction to the first prompt unit, and the first instruction is used to instruct the first prompt unit to prompt the sound field optimization center point location information.
- the multiple areas are areas in the vehicle cabin.
- the multiple areas include a front row area and a rear row area.
- the multiple areas include a main driving area and a co-driving area.
- the transceiver unit is further configured to send a second instruction to a second prompt unit, where the second instruction is used to instruct the second prompt unit to prompt the location information of the multiple areas where the multiple users are located.
- the processing unit is specifically configured to: adjust the playback intensity of each of the multiple sound generating devices.
- the multiple sound generating devices include a first sound generating device, and the processing unit is specifically configured to: control the playback intensity of the first sound generating device to be the first playback intensity;
- the transceiver unit is also used to receive a third instruction, which is an instruction indicating to adjust the playback intensity of the first sound generating device from the first playback intensity to a second playback intensity;
- the processing unit is also used to adjust the playback intensity of the first sound generating device to the second playback intensity.
- the sensory information includes one or more of image information, pressure information, and sound information.
- the electronic device may be a chip or a vehicle-mounted device (e.g., a controller).
- the transceiver unit may be an interface circuit.
- the processing unit may be a processor, a processing device, and the like.
- an apparatus in a fourth aspect includes units for performing the method in any implementation of the first aspect.
- a device in a fifth aspect includes a processing unit and a storage unit, where the storage unit is used to store instructions and the processing unit executes the instructions stored in the storage unit, so that the device performs the method in any possible implementation of the first aspect.
- the above-mentioned processing unit may be a processor
- the above-mentioned storage unit may be a memory
- the memory may be a storage unit in the chip (such as a register or a cache), or a storage unit located outside the chip in the vehicle (such as a read-only memory or a random-access memory).
- a system in a sixth aspect, includes a sensor and an electronic device, where the electronic device may be the electronic device described in any possible implementation manner of the third aspect.
- the system further includes multiple sound generating devices.
- a system in a seventh aspect, includes a plurality of sound emitting devices and an electronic device, where the electronic device may be the electronic device described in any possible implementation manner of the third aspect.
- the system further includes a sensor.
- a vehicle is provided in an eighth aspect, where the vehicle includes the sound generating system described in any possible implementation of the second aspect; or the electronic device described in any possible implementation of the third aspect; or the apparatus described in any possible implementation of the fourth aspect; or the device described in any possible implementation of the fifth aspect; or the system described in any possible implementation of the sixth aspect; or the system described in any possible implementation of the seventh aspect.
- a computer program product includes computer program code which, when run on a computer, causes the computer to execute the method of the first aspect.
- a computer-readable medium stores program code which, when run on a computer, causes the computer to execute the method of the first aspect.
- Fig. 1 is a schematic functional block diagram of a vehicle provided by an embodiment of the present application.
- Fig. 2 is a schematic structural diagram of a sound generation system provided by an embodiment of the present application.
- Fig. 3 is another structural schematic diagram of a sound generating system provided by an embodiment of the present application.
- Figure 4 is a top view of a vehicle.
- Fig. 5 is a schematic flow chart of a method for controlling a sound emitting device provided by an embodiment of the present application.
- Fig. 6 is a schematic diagram of the positions of four speakers in the vehicle cabin.
- Fig. 7 is a schematic diagram of the sound field optimization center of the in-car speakers when there are users in the driver's seat, the front passenger seat, the second-row left area, and the second-row right area, according to an embodiment of the present application.
- Fig. 8 is a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- Fig. 9 is a schematic diagram of the sound field optimization center of the speaker in the car when there are users in the driver's seat, the passenger's seat and the left area of the second row provided by the embodiment of the application.
- Fig. 10 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 11 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 12 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 13 is a schematic diagram of the sound field optimization center of the speakers in the car when there are users in the driver's seat and the left area of the second row provided by the embodiment of the present application.
- Fig. 14 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 15 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 16 is a schematic diagram of an in-vehicle sound field optimization center provided by an embodiment of the present application when there are users in the main driving seat and the co-driving seat.
- Fig. 17 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 18 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 19 is a schematic diagram of the interior sound field optimization center provided by the embodiment of the present application when there are users in the driver's seat and the second row on the left.
- Fig. 20 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 21 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 22 is a schematic diagram of an in-vehicle sound field optimization center provided by an embodiment of the present application when there is a user in the main driving seat.
- Fig. 23 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 24 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 25 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 26 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
- Fig. 27 is a set of graphical user interface GUI provided by the embodiment of the present application.
- Fig. 28 is another set of GUI provided by the embodiment of the present application.
- Fig. 29 is a schematic diagram of the method for controlling a sound generating device according to an embodiment of the present application when it is applied to a home theater.
- Fig. 30 is a schematic diagram of a sound field optimization center under a home theater provided by an embodiment of the present application.
- Fig. 31 is another schematic flowchart of the control method of the sound generating device provided by the embodiment of the present application.
- Fig. 32 is a schematic structural diagram of a sounding system provided by an embodiment of the present application.
- Fig. 33 is a schematic block diagram of a device provided by an embodiment of the present application.
- FIG. 1 is a schematic functional block diagram of a vehicle 100 provided by an embodiment of the present application.
- Vehicle 100 may be configured in a fully or partially autonomous driving mode.
- the vehicle 100 can obtain its surrounding environment information through the perception system 120, and obtain an automatic driving strategy based on the analysis of the surrounding environment information to realize fully automatic driving, or present the analysis results to the user to realize partially automatic driving.
- Vehicle 100 may include various subsystems such as infotainment system 110 , perception system 120 , decision control system 130 , drive system 140 , and computing platform 150 .
- vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components.
- each subsystem and component of the vehicle 100 may be interconnected in a wired or wireless manner.
- the infotainment system 110 may include a communication system 111 , an entertainment system 112 and a navigation system 113 .
- Communication system 111 may include a wireless communication system that may wirelessly communicate with one or more devices, either directly or via a communication network.
- wireless communication system 146 may use third generation (3G) cellular communication, such as code division multiple access (CDMA), evolution-data optimized (EVDO), global system for mobile communications (GSM), or general packet radio service (GPRS); or fourth generation (4G) cellular communication, such as long term evolution (LTE).
- the wireless communication system can use Wi-Fi to communicate over a wireless local area network (WLAN).
- the wireless communication system 146 may communicate directly with the device using an infrared link, Bluetooth, or ZigBee.
- Other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
- the entertainment system 112 can include a central control screen, a microphone, and a sound system. Users can listen to the radio or play music in the car through the entertainment system. The central control screen can be a touch screen, which users operate by touching it. In some cases, the user's voice signal can be acquired through the microphone, and the vehicle 100 can be controlled based on analysis of the user's voice signal, such as adjusting the temperature inside the vehicle. In other cases, music may be played to the user via a speaker.
- the navigation system 113 may include a map service provided by a map provider, so as to provide navigation for the driving route of the vehicle 100 , and the navigation system 113 may cooperate with the global positioning system 121 and the inertial measurement unit 122 of the vehicle.
- the map service provided by the map provider can be a two-dimensional map or a high-definition map.
- the perception system 120 may include several kinds of sensors that sense information about the environment around the vehicle 100 .
- the perception system 120 may include a global positioning system 121 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 122, a lidar 123, a millimeter-wave radar 124, an ultrasonic radar 125, and a camera device 126.
- the perception system 120 may also include sensors that monitor internal systems of the vehicle 100 (e.g., an interior air quality monitor, fuel gauge, oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding properties (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for safe operation of the vehicle 100 .
- the global positioning system 121 may be used to estimate the geographic location of the vehicle 100 .
- the inertial measurement unit 122 is used to sense the position and orientation changes of the vehicle 100 based on inertial acceleration.
- inertial measurement unit 122 may be a combination accelerometer and gyroscope.
- the lidar 123 may utilize laser light to sense objects in the environment in which the vehicle 100 is located.
- lidar 123 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
- the millimeter wave radar 124 may utilize radio signals to sense objects within the surrounding environment of the vehicle 100 .
- the millimeter wave radar 124 may also be used to sense the velocity and/or heading of objects.
- the ultrasonic radar 125 may sense objects around the vehicle 100 using ultrasonic signals.
- the camera device 126 can be used to capture image information of the surrounding environment of the vehicle 100 .
- the camera device 126 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, etc., and the image information acquired by the camera device 126 may include still images or video stream information.
- the decision-making control system 130 includes a computing system 131 for analyzing and making decisions based on the information acquired by the perception system 120.
- the decision-making control system 130 also includes a vehicle controller 132 for controlling the power system of the vehicle 100, and for controlling the steering of the vehicle 100.
- Computing system 131 is operable to process and analyze various information acquired by perception system 120 in order to identify objects and/or features in the environment surrounding vehicle 100 .
- the objects may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles.
- the computing system 131 may use technologies such as object recognition algorithms, structure from motion (SFM) algorithms, and video tracking. In some embodiments, computing system 131 may be used to map the environment, track objects, estimate the velocity of objects, and the like.
- the computing system 131 can analyze various information obtained and obtain a control strategy for the vehicle.
- the vehicle controller 132 can be used for coordinated control of the power battery and the engine 141 of the vehicle, so as to improve the power performance of the vehicle 100 .
- the steering system 133 is operable to adjust the heading of the vehicle 100 .
- it could be a steering wheel system.
- the throttle 134 is used to control the operating speed of the engine 141 and thus the speed of the vehicle 100 .
- the braking system 135 is used to control deceleration of the vehicle 100 .
- Braking system 135 may use friction to slow wheels 144 .
- braking system 135 may convert kinetic energy of wheels 144 into electrical current.
- the braking system 135 may also take other forms to slow the wheels 144 to control the speed of the vehicle 100 .
- Drive system 140 may include components that provide powered motion to vehicle 100 .
- drive system 140 may include engine 141 , energy source 142 , transmission 143 and wheels 144 .
- the engine 141 may be an internal combustion engine, an electric motor, an air compression engine or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
- Engine 141 converts energy source 142 into mechanical energy.
- Examples of energy source 142 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power.
- the energy source 142 may also provide energy to other systems of the vehicle 100.
- Transmission 143 may transmit mechanical power from engine 141 to wheels 144 .
- Transmission 143 may include a gearbox, a differential, and a drive shaft.
- the transmission device 143 may also include other devices, such as clutches.
- drive shafts may include one or more axles that may be coupled to one or more wheels 144 .
- Computing platform 150 may include at least one processor 151 that may execute instructions 153 stored in a non-transitory computer-readable medium such as memory 152 .
- computing platform 150 may also be a plurality of computing devices that control individual components or subsystems of vehicle 100 in a distributed manner.
- Processor 151 may be any conventional processor, such as a commercially available CPU.
- the processor 151 may also include, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof.
- Although FIG. 1 functionally illustrates the processor, memory, and other elements of computer 110 in the same block, those of ordinary skill in the art will understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored within the same physical housing.
- the memory may be a hard drive or other storage medium located in a different housing than the computer 110 .
- references to a processor or computer are to be understood to include references to collections of processors or computers or memories that may or may not operate in parallel.
- some components, such as the steering and deceleration components, may each have their own processor that only performs calculations related to component-specific functions.
- the processor may be located remotely from the vehicle and be in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the necessary steps to perform a single maneuver.
- memory 152 may contain instructions 153 (eg, program logic) executable by processor 151 to perform various functions of vehicle 100 .
- Memory 152 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 110 , perception system 120 , decision control system 130 , and drive system 140 .
- memory 152 may also store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by vehicle 100 and computing platform 150 during operation of vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
- Computing platform 150 may control functions of vehicle 100 based on input received from various subsystems (eg, drive system 140 , perception system 120 , and decision-making control system 130 ). For example, computing platform 150 may utilize input from decision control system 130 in order to control steering system 133 to avoid obstacles detected by perception system 120 . In some embodiments, computing platform 150 is operable to provide control over many aspects of vehicle 100 and its subsystems.
- one or more of these components described above may be installed separately from or associated with the vehicle 100 .
- memory 152 may exist partially or completely separate from vehicle 100 .
- the components described above may be communicatively coupled together in a wired and/or wireless manner.
- FIG. 1 should not be construed as limiting the embodiment of the present application.
- An autonomous vehicle traveling on a road can identify objects within its surroundings to determine adjustments to its current speed.
- the objects may be other vehicles, traffic control devices, or other types of objects.
- each identified object may be considered independently, and its respective characteristics, such as its current speed, acceleration, and distance to the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
- the vehicle 100 or a sensing and computing device (e.g., computing system 131, computing platform 150) associated with the vehicle 100 may predict the behavior of the identified objects based on the characteristics of the identified objects and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
- the behavior of each identified object may depend on the behavior of the others, so all identified objects can also be considered together to predict the behavior of a single identified object.
- the vehicle 100 is able to adjust its speed based on the predicted behavior of the identified object.
- the self-driving car is able to determine what stable state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object.
- other factors may also be considered to determine the speed of the vehicle 100 , such as the lateral position of the vehicle 100 in the traveling road, the curvature of the road, the proximity of static and dynamic objects, and the like.
- the computing device may also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in its vicinity (e.g., cars in adjacent lanes on the road).
- the above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, amusement park vehicle, construction equipment, tram, golf cart, or train, etc.; the embodiment of the present application does not particularly limit this.
- the embodiment of the present application provides a control method of a sound generating device, a sound generating system, and a vehicle. By identifying the location information of the area where the user is located, the sound field optimization center is automatically adjusted, so that each user can achieve a good listening effect.
- Fig. 2 shows a schematic structural diagram of a sound emitting system provided by an embodiment of the present application.
- the sound generating system can be a controller area network (CAN) control system, and the CAN control system can include multiple sensors (for example, sensor 1, sensor 2, etc.), multiple electronic control units (ECUs), a car entertainment host, a speaker controller, and speakers.
- sensors include but are not limited to cameras, microphones, ultrasonic radars, millimeter-wave radars, lidars, vehicle speed sensors, motor power sensors, and engine speed sensors, etc.;
- the ECU is used to receive data collected by sensors and execute corresponding commands. After obtaining periodic signals or event signals, the ECU can send these signals to the public CAN network, where the ECU includes but is not limited to vehicle controllers, hybrid controllers, automatic transmission controllers, and automatic driving controllers;
- the entertainment host is used to capture the periodic signal or event signal sent by each ECU on the public CAN network, and executes the corresponding operation or forwards the signal to the speaker controller when the corresponding signal is recognized;
- the speaker controller is used to adjust the speakers according to the command signals collected from the vehicle entertainment head unit.
- the vehicle entertainment host can capture the image information collected by the camera from the CAN bus.
- the car entertainment host can determine from the image information whether there are users in multiple areas of the car, and send the users' location information to the speaker controller.
- the speaker controller can control the playback intensity of each speaker according to the location information of the user.
- Fig. 3 shows another schematic structural diagram of an in-vehicle sound generation system provided by an embodiment of the present application.
- the sound generating system can use a ring network communication architecture, in which all sensors and actuators (such as speakers, ambient lights, air conditioners, motors, and other components that receive and execute commands) connect to the nearest vehicle integration unit (VIU).
- As a communication interface unit, the VIU can be deployed where vehicle sensors and actuators are densely located, so that the sensors and actuators can be connected nearby; at the same time, the VIU can have certain computing and driving capabilities (for example, the VIU can absorb part of the driving calculation functions of the actuators); sensors include but are not limited to cameras, microphones, ultrasonic radars, millimeter wave radars, lidars, vehicle speed sensors, motor power sensors, and engine speed sensors.
- The VIUs communicate with each other over a network, and the intelligent driving computing platform/mobile data center (MDC), the vehicle domain controller (VDC), and the smart cockpit domain controller (CDC) each have redundant access to the ring communication network formed by the VIUs.
- a sensor (for example, a camera) can send its collected data to the nearby VIU.
- the VIU can publish the data to the ring network; the MDC, VDC, and CDC collect the relevant data from the ring network, compute and convert it into signals including user location information, and publish these back to the ring network; the corresponding computing and driving capability in the VIU then controls the playback intensity of the speakers.
- VIU1 is used to drive speaker 1, VIU2 to drive speaker 2, VIU3 to drive speaker 3, and VIU4 to drive speaker 4.
- the layout of the VIU can be independent of the speaker, for example, VIU1 can be arranged at the left rear of the vehicle, and speaker 1 can be arranged near the door on the driver's side.
- Sensors or actuators can be connected to the nearest VIU to save wiring harnesses. Due to the limited number of interfaces on the MDC, VDC, and CDC, the VIU can undertake the access of multiple sensors and multiple actuators, performing the interface and communication functions.
- Which VIU a sensor or actuator of the sound generating system is connected to, and which controller controls it, may be set at the factory or may be defined by the user, and the hardware is replaceable and upgradeable.
- the VIU can absorb the driving calculation functions of some sensors and actuators, so that when some controllers (for example, the CDC or VDC) fail, the VIU can directly process the data collected by the sensors and then perform control.
- the communication architecture shown in FIG. 3 may be an intelligent digital vehicle platform (intelligent digital vehicle platform, IDVP) ring network communication architecture.
- Figure 4 shows a top view of a vehicle.
- position 1 is the main driver's seat
- position 2 is the co-driver's position
- positions 3-5 are the rear area
- positions 6a-6d are where the four speakers in the car are located
- position 7 is where the camera in the car is located
- position 8 is where the CDC and the vehicle's central control panel are located.
- the speaker can be used to play the media sound in the car
- the camera in the car can be used to detect the position of the passengers in the car
- the car central control screen can be used to display image information and the interface of the application program
- the CDC is used to connect various peripherals and also provides data analysis and processing capabilities.
- FIG. 4 is only illustrated by taking the speakers at positions 6a-6d located near the main driver's door, near the passenger's door, near the left door of the second row, and near the right door of the second row as an example.
- the position of the loudspeaker is not specifically limited.
- the loudspeaker can also be located near the door of the vehicle, near the large central control screen, the ceiling, the floor, and the seat (for example, on the pillow of the seat).
- Fig. 5 shows a schematic flow chart of a method 500 for controlling a sound emitting device provided by an embodiment of the present application.
- the method 500 may be applied in a vehicle including a plurality of sound emitting devices (eg, speakers), the method 500 includes:
- the vehicle acquires the location information of the user.
- the vehicle can obtain image information of various areas in the vehicle (such as the main driver's seat, the passenger seat, and the rear area) by activating the in-vehicle camera, and use the image information of each area to determine whether there is a user in it.
- the vehicle can analyze the outline of a human face through the image information collected by the camera, so that the vehicle can determine whether there is a user in the area.
- the vehicle can analyze the iris information of the human eye contained in the image information collected by the camera, so that the vehicle can determine that there is a user in the area.
- when the vehicle detects that the user has turned on the sound field adaptive switch, the vehicle can start the camera to acquire image information of various areas in the vehicle.
- the user can select the setting option through the large central control screen and enter the sound effect function interface, and can choose to turn on the sound field adaptive switch in the sound effect function interface.
- the vehicle can also detect whether there is a user in the current area through the pressure sensor under the seat. Exemplarily, when the pressure value detected by the pressure sensor on the seat in a certain area is greater than or equal to a preset value, it may be determined that there is a user in the area.
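The pressure-sensor rule above can be sketched as follows; this is only an illustrative example, and the threshold value, seat names, and function name are assumptions not taken from the patent.

```python
# Hypothetical sketch of the seat-occupancy check described above: a seat is
# considered occupied when its pressure reading is greater than or equal to
# a preset value. The threshold and seat names are illustrative assumptions.

PRESSURE_THRESHOLD_N = 200.0  # assumed preset value, in newtons

def occupied_seats(pressure_readings: dict[str, float],
                   threshold: float = PRESSURE_THRESHOLD_N) -> set[str]:
    """Return the set of seats whose pressure reading meets the threshold."""
    return {seat for seat, p in pressure_readings.items() if p >= threshold}

readings = {"driver": 540.0, "co-driver": 30.0,
            "row2-left": 480.0, "row2-right": 0.0}
print(sorted(occupied_seats(readings)))  # ['driver', 'row2-left']
```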
- the vehicle can also determine sound source location information through the audio information acquired by the microphone array, so as to determine in which areas users exist.
- the vehicle can also acquire the user's position information through a combination of one or more of the above-mentioned in-vehicle camera, pressure sensor, and microphone array.
- data collected by sensors may be transmitted to the CDC, and the CDC may process the data to determine which areas there are users.
- the CDC can convert this into a coded position. For example, when only the main driver's seat is occupied, the CDC can output 1000; when only the passenger's seat is occupied, 0100; when only the second row left is occupied, 0010; when the main driver's seat and the passenger seat are occupied, 1100; and when the driver's seat, the passenger seat, and the second row left are occupied, 1110.
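The coded-position examples above correspond to a simple one-bit-per-area encoding. The sketch below reproduces that encoding under the assumption (consistent with the examples given) that the bit order is driver, co-driver, second row left, second row right; the function and area names are illustrative.

```python
# Sketch of the CDC occupancy encoding described above: each of the four
# in-car areas maps to one character of the output code, in the assumed
# order (driver, co-driver, second-row left, second-row right).

AREA_ORDER = ("driver", "co-driver", "row2-left", "row2-right")

def encode_occupancy(occupied: set[str]) -> str:
    """Return a 4-character code, '1' for each occupied area."""
    return "".join("1" if area in occupied else "0" for area in AREA_ORDER)

print(encode_occupancy({"driver"}))                            # 1000
print(encode_occupancy({"co-driver"}))                         # 0100
print(encode_occupancy({"driver", "co-driver", "row2-left"}))  # 1110
```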
- the description takes as an example dividing the areas in which users sit into the main driving seat, the co-driving seat, the second row left, and the second row right.
- the embodiment of the present application is not limited thereto.
- for example, the area inside the car can also be divided into the main driver's seat, the passenger's seat, second row left, second row middle, and second row right; for another example, for a 7-seater SUV, the area inside the car can be divided into the main driver's seat, the passenger's seat, second row left, second row right, third row left, and third row right.
- the area inside the car may be divided into a front row area and a rear row area.
- the area in the car can be divided into a driving area, a passenger area, and so on.
- the vehicle adjusts the sound generating device according to the location information of the user.
- taking the sound generating device being a speaker as an example, the description is made in conjunction with the speakers 6a-6d in FIG. 4 .
- Figure 6 shows the locations of the four speakers. Take as an example that the figure formed by connecting the points where the four speakers are located is a rectangle ABCD: speaker 1 is set at point A, speaker 2 at point B, speaker 3 at point C, and speaker 4 at point D. Point O is the center point of rectangle ABCD (the distances from point O to points A, B, C, and D are equal). It should be understood that different cars have different speaker positions and numbers of speakers; during specific implementation, a specific adjustment method can be designed according to the car model or the arrangement of the speakers in the car, which is not limited in this application.
- Fig. 7 shows a schematic diagram of the sound field optimization center of the speakers in the car when there are users in the driver's seat, the co-driver's seat, the left side of the second row, and the right side of the second row provided by the embodiment of the present application.
- the center point of each area can form a rectangle EFGH, and the center point of the rectangle EFGH can be point Q.
- point Q can be the current in-car sound field optimization center point.
- point Q may coincide with point O. Since the distance from the center point Q of the rectangle EFGH to the four speakers is equal, the vehicle can control the four speakers to play at the same intensity (for example, the four speakers all play at p).
- the vehicle can control the playback intensity of the four speakers according to the distance between the Q point and the four speakers.
- the vehicle can control the playback intensity of each of speakers 1-4 individually according to its respective distance from point Q (the per-speaker formulas are not reproduced here).
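The patent's per-speaker intensity formulas are not reproduced in this text, so the following is only an illustrative sketch under a stated assumption: each speaker's playback intensity is scaled in proportion to its distance from the sound-field center Q (normalized by the mean distance), so that when all speakers are equidistant from Q they all play at the base intensity p. The coordinates, function name, and scaling rule are assumptions, not the patent's formulas.

```python
import math

# Illustrative sketch (NOT the patent's formula): scale the base intensity p
# by each speaker's distance to the sound-field center Q, normalized by the
# mean distance, so equidistant speakers all play at p.

def playback_intensities(q, speakers, p=1.0):
    """Return one intensity per speaker, proportional to distance from q."""
    dists = [math.dist(q, s) for s in speakers]
    mean = sum(dists) / len(dists)
    return [p * d / mean for d in dists]

# Rectangle ABCD (3-4-5 geometry) with Q at its center: all distances are 5,
# so all four speakers play at the base intensity p.
speakers = [(0.0, 0.0), (6.0, 0.0), (6.0, 8.0), (0.0, 8.0)]
print(playback_intensities((3.0, 4.0), speakers))  # [1.0, 1.0, 1.0, 1.0]

# Off-center Q: nearer speakers are attenuated, farther ones boosted.
print(playback_intensities((2.0, 2.0), speakers))
```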
- Fig. 8 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- the large central control screen can prompt the user: "People are detected in the driver's seat, the passenger seat, the second row left, and the second row right".
- the current sound field optimization center point can be the point equidistant from the center point of the area where the main driver's seat is located, the center point of the area where the passenger seat is located, the center point of the second row left area, and the center point of the second row right area.
- Fig. 9 shows a schematic diagram of the sound field optimization center of the speaker in the car when there are users in the driver's seat, the passenger's seat and the left area of the second row provided by the embodiment of the present application.
- the center points of the areas where the main driver's seat, the co-driver's seat, and the second row left are located can form a triangle EFG, wherein the circumcenter of triangle EFG can be point Q.
- point Q can be the current in-car sound field optimization center point.
- point Q may coincide with point O. Since the distances from point Q to the four speakers are then equal, the vehicle can control the four speakers to play at the same intensity (for example, all four speakers play at p).
- the Q point and the O point may also not coincide.
- the manner in which the vehicle controls the playback intensity of the four speakers can refer to the description in the above embodiment, and will not be repeated here.
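For the three-occupant case above, point Q is the circumcenter of triangle EFG, the point equidistant from the three seat-area centers. The sketch below computes it with the standard circumcenter formula; the coordinates and function name are illustrative assumptions, not taken from the patent.

```python
# Sketch of locating point Q for three occupied areas: the circumcenter of
# the triangle formed by the three seat-area center points E, F, G.

def circumcenter(e, f, g):
    """Circumcenter of the triangle with 2-D vertices e, f, g."""
    (ax, ay), (bx, by), (cx, cy) = e, f, g
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Right triangle: the circumcenter is the midpoint of the hypotenuse.
print(circumcenter((0.0, 0.0), (4.0, 0.0), (0.0, 2.0)))  # (2.0, 1.0)
```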
- Fig. 10 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- the large central control screen can prompt the user "People are detected in the main driving seat, the passenger seat, and the second row left", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center point of the area where the main driver's seat is located, the center point of the area where the passenger seat is located, and the center point of the area where the second row left is located.
- Fig. 11 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- the large central control screen can prompt the user "People are detected in the driver's seat, the passenger seat, and the second row right", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center point of the area where the main driver's seat is located, the center point of the area where the passenger seat is located, and the center point of the area where the second row right is located.
- Fig. 12 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there are users in the main driving seat, the second row left, and the second row right areas of the car, it can prompt the user through the large central control screen "People are detected in the main driving seat, the second row left, and the second row right", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center point of the area where the main driving seat is located, the center point of the area where the second row left is located, and the center point of the area where the second row right is located.
- Fig. 13 shows a schematic diagram of the sound field optimization center of the speakers in the car when there are users in the driver's seat and the left area of the second row provided by the embodiment of the present application.
- the line connecting the center points of the areas where the driver's seat and the second row left are located is EG, and the midpoint of EG can be point Q.
- point Q can be the current in-car sound field optimization center point.
- point Q may coincide with point O. Since the distances from the midpoint of segment EG to the four speakers are then equal, the vehicle can control the four speakers to play at the same intensity (for example, all four speakers play at p).
- the vehicle can control the playback intensity of the four speakers according to the distance between the Q point and the four speakers.
- Fig. 14 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- the large central control screen can prompt the user "People are detected in the main driving seat and the second row right", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center point of the area where the main driving seat is located and the center point of the area where the second row right is located.
- Fig. 15 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there are users in the passenger seat and the second row left area of the car, it can prompt the user through the large central control screen "People are detected in the passenger seat and the second row left", and at the same time remind the user that the current sound field optimization center point can be a point equidistant from the center point of the area where the passenger seat is located and the center point of the area where the second row left is located.
- Fig. 16 shows a schematic diagram of an in-vehicle sound field optimization center provided by an embodiment of the present application when there are users in the main driving seat and the co-driving seat.
- the line connecting the center points of the areas where the main driver's seat and the co-driver's seat are located is EF, where the midpoint of EF may be point P.
- point P can be the current in-car sound field optimization center point.
- Fig. 17 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there are users in the main driving seat and the co-driving seat, it can prompt the user through the large central control screen that "someone is detected in the main driving seat and the co-driving seat", and at the same time remind the user that the current sound field optimization center point can be a point at the same distance from the center point of the area where the driver's seat is located and the center point of the area where the passenger seat is located.
- the vehicle can control the playback intensity of the four speakers according to the distance between point P and the four speakers; that is, the playback intensities of speaker 1, speaker 2, speaker 3 and speaker 4 can each be controlled according to the respective distance from point P.
- Fig. 18 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there are users in the left area of the second row and the right area of the second row, it can prompt the user through the large central control screen that "someone is detected in the second row left and the second row right", and at the same time remind the user that the current sound field optimization center point can be a point at the same distance from the center point of the area where the second row left is located and the center point of the area where the second row right is located.
- Fig. 19 shows a schematic diagram of the interior sound field optimization center provided by the embodiment of the present application when there are users in the driver's seat and the second row on the left.
- the line connecting the center point of the area where the driver's seat is located and the center point of the area where the second row left is located is EH, where the midpoint of EH can be point R.
- point R can be the current in-car sound field optimization center point.
- Fig. 20 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there are users in the main driving seat and the left area of the second row, it can prompt the user through the large central control screen that "someone is detected in the main driving seat and the second row left", and at the same time remind the user that the current sound field optimization center point may be a point at the same distance from the center point of the area where the main driving seat is located and the center point of the area where the second row left is located.
- the vehicle can control the playback intensity of the four speakers according to the distance between point R and the four speakers; that is, the playback intensities of speaker 1, speaker 2, speaker 3 and speaker 4 can each be controlled according to the respective distance from point R.
- Fig. 21 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- the large central control screen can prompt the user to "detect someone in the passenger seat and the second row right", and at the same time remind the user that the current sound field optimization center point can be A point at the same distance from the center point of the area where the front passenger seat is located and the center point of the area where the second row right is located.
- Fig. 22 shows a schematic diagram of an in-vehicle sound field optimization center provided by an embodiment of the present application when there is a user in the main driving seat.
- the center point of the area where the main driving seat is located is point E, where point E can be the center point for optimizing the sound field in the current car.
- Fig. 23 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there is a user in the main driving seat, it can prompt the user through the large central control screen that "someone is detected in the main driving seat", and at the same time remind the user that the current sound field optimization center point can be the center point of the area where the main driving seat is located.
- the vehicle can control the playback intensity of the four speakers according to the distance between point E and the four speakers; that is, the playback intensities of speaker 1, speaker 2, speaker 3 and speaker 4 can each be controlled according to the respective distance from point E.
- Fig. 24 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there is a user in the passenger seat, it can prompt the user through the large central control screen that "someone is detected in the passenger seat", and at the same time remind the user that the current sound field optimization center point can be the center point of the area where the passenger seat is located.
- Fig. 25 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there is a user in the left area of the second row, it can prompt the user through the large central control screen that "someone is detected on the left of the second row", and at the same time remind the user that the current sound field optimization center point can be the center point of the area where the second row left is located.
- Fig. 26 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
- when the vehicle detects that there is a user in the right area of the second row, it can prompt the user through the large central control screen that "someone is detected on the right of the second row", and at the same time remind the user that the current sound field optimization center point can be the center point of the area where the second row right is located.
- the above description takes, as an example, the case in S501 where the location information of the user is the center point of the area where the user is located; this embodiment of the present application is not limited thereto.
- the user's location information may also be other preset points in the area where the user is located, or the user's location information may also be a certain point in the area calculated according to a preset rule (eg, a preset algorithm).
- the location information of the user may also be determined according to the location information of the user's ears.
- the vehicle can determine the position information of the user's ear from the image information collected by the camera device.
- the user's ear position information is the midpoint of the line connecting the first point and the second point, wherein the first point is a certain point on the user's left ear, and the second point is a certain point on the user's right ear.
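The midpoint rule above (user position taken as the midpoint between a point on the left ear and a point on the right ear) can be sketched as follows; the 3-D coordinates are illustrative.

```python
def ear_position(left_ear, right_ear):
    """User ear position = midpoint of the line connecting a point on the
    left ear (first point) and a point on the right ear (second point)."""
    return tuple((l + r) / 2 for l, r in zip(left_ear, right_ear))

# Illustrative in-cabin coordinates for the two ear points:
print(ear_position((0.25, 1.5, 1.0), (0.75, 1.5, 1.0)))  # (0.5, 1.5, 1.0)
```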
- the location information of the area may be determined according to the location information of the user's ear or the location information of the auricle of the ear. With reference to Fig. 27 and Fig. 28, the process in which the user manually adjusts the playback intensity of a certain speaker, after the vehicle has adjusted the playback intensity of multiple sound generating devices through the user's location information, is described below.
- FIG. 27 shows a set of graphical user interface (graphical user interface, GUI) provided by the embodiment of the present application.
- the vehicle can prompt the user through the HMI to "detect the main driving seat, the auxiliary driving seat, There are people in the second row left and the second row right" and remind the user of the current sound field optimization center.
- the smiling faces on the driver's seat, the passenger seat, the second row left and the second row right indicate that there are users in those areas.
- an icon 2701 (for example, a trash can icon) can be displayed through the HMI.
- when the vehicle detects that the user drags the smiling face in the left area of the second row to the icon 2701, the vehicle can display a GUI as shown in (b) in FIG. 27 through the HMI.
- the vehicle can prompt the user through the HMI that "the left area of the second row has been moved to the left area for you.” speaker volume drops to 0”.
- the playback intensity of the current four speakers can be p.
- when the vehicle detects that the user drags the smiling face in the left area of the second row to the icon 2701, the vehicle can reduce the playback intensity of the speaker in the left area of the second row to 0, or reduce it from p to 0.1p.
- the embodiment of the application does not limit this.
- Fig. 28 shows a set of GUI provided by the embodiment of the present application.
- the vehicle can prompt the user through the HMI to "detect the main driving seat, the auxiliary driving seat, There are people in the second row left and the second row right" and remind the user of the current sound field optimization center.
- a scroll bar 2801 of playback intensity may be displayed.
- the playback intensity scroll bar 2801 may include a scroll block 2802 .
- in response to detecting that the user's finger slides upward in the left area of the second row, the vehicle can increase the playback intensity of the speakers near the left area of the second row and display the scroll block 2802 moving up on the HMI.
- the playback intensity of the speakers near the left area of the second row may be increased from p to 1.5p.
- the vehicle can prompt the user through the HMI that "the speaker volume in the left area of the second row has been increased for you".
- after the vehicle adjusts the playback intensity of the first sound generating device to the first playback intensity, if it detects that the user adjusts the playback intensity of a certain speaker from the first playback intensity to the second playback intensity, it can adjust the playback intensity of that speaker to the second playback intensity.
- the user can quickly complete the adjustment of the playback intensity of the speakers in the area, so that the speakers in the area can better match the user's desired listening effect.
- the vehicle can also determine the state of the user in a certain area through the image information collected by the camera, so as to adjust the playback intensity of the speakers near the area in combination with the location information of the area and the state of the user.
- the vehicle may control the playback intensity of the speakers near the left area of the second row to be 0 or other values.
- the second playback intensity may also be a default playback intensity (for example, the second playback intensity is 0).
- when the vehicle detects the user's preset operation in a certain area (for example, the left area of the second row) on the large central control screen, the vehicle can adjust the playback intensity of the speakers in this area from the first playback intensity to the default playback intensity.
- the preset operation includes, but is not limited to, a user's long-press operation in this area (for example, a long press on a seat in the left area of the second row), sliding or clicking operations in this area, and the like.
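A minimal sketch of the preset-operation behavior described above. The zone names, class structure and event handling are illustrative assumptions; the default playback intensity of 0 follows the example given in the text.

```python
DEFAULT_INTENSITY = 0.0  # the text gives 0 as one example default intensity

class ZoneVolumeControl:
    """Tracks per-zone speaker playback intensity; a preset operation
    (e.g. a long press on a seat area on the central control screen)
    resets that zone's speakers to the default playback intensity."""

    def __init__(self, zones, intensity):
        self.intensity = {zone: intensity for zone in zones}

    def on_preset_operation(self, zone):
        # Triggered by long-press, slide or click in the zone on the HMI.
        self.intensity[zone] = DEFAULT_INTENSITY

ctrl = ZoneVolumeControl(["driver", "passenger", "rear_left", "rear_right"], 1.0)
ctrl.on_preset_operation("rear_left")  # user long-presses the rear-left seat
print(ctrl.intensity["rear_left"])     # 0.0
```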
- Fig. 29 shows a schematic diagram when the method for controlling a sound generating device according to an embodiment of the present application is applied to a home theater.
- the home theater may include a speaker 1 , a speaker 2 and a speaker 3 .
- the sound system in the home theater can adjust the three speakers by detecting the positional relationship between the user and the three speakers.
- FIG. 30 shows a schematic diagram of a sound field optimization center under a home theater provided by an embodiment of the present application.
- speaker 1 is set at point A of triangle ABC, speaker 2 is set at point B, and speaker 3 is set at point C, where O is the circumcenter of triangle ABC.
- when the user is located at point O, the sound system can control the playback intensities of speaker 1, speaker 2 and speaker 3 to be the same (for example, the playback intensities of the three speakers are all p).
- the sound system can adjust the playback intensity of the three speakers according to the positional relationship between the area where the user is located and the three speakers; that is, the playback intensities of speaker 1, speaker 2 and speaker 3 can each be controlled according to the respective positional relationship.
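For the triangle layout of Fig. 30, the circumcenter O is by definition equidistant from the three speaker positions, which is why equal playback intensities suffice when the user sits at O. The computation below is standard plane geometry; the coordinates are illustrative.

```python
def circumcenter(a, b, c):
    """Circumcenter of triangle ABC: the point equidistant from A, B and C."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Speakers at A, B, C; for this right triangle the circumcenter is the
# midpoint of the hypotenuse BC.
o = circumcenter((0.0, 0.0), (4.0, 0.0), (0.0, 3.0))
print(o)  # (2.0, 1.5)
```

When the user is at O, all three speakers can play at the same intensity p; otherwise the per-speaker distances from the user's area drive the adjustment.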
- Fig. 31 shows a schematic flowchart of a method 3100 for controlling a sound emitting device provided by an embodiment of the present application.
- the method 3100 may be applied in the first device.
- the method 3100 includes:
- the first device acquires location information of multiple areas where multiple users are located.
- the acquiring, by the first device, of the location information of the multiple areas where the multiple users are located includes: acquiring sensing information; and determining the location information of the multiple areas according to the sensing information, where the sensing information includes one or more of image information, pressure information and sound information.
- the sensor information may include image information.
- the first device can acquire image information through an image sensor.
- the vehicle may determine whether the image information includes human face contour information, human ear information, or iris information through the image information collected by the camera device.
- the vehicle can obtain image information of the main driving area collected by the driver-facing camera; if the vehicle determines that the image information includes one or more of human face contour information, human ear information or iris information, the first device can determine that there is a user in the main driving area.
- the embodiment of the present application does not limit the implementation process of determining one or more of human face contour information, human ear information, or iris information included in the image information.
- for example, a vehicle could feed the image information into a neural network that classifies whether the area includes a user's face.
- the vehicle can also establish a coordinate system for the main driving area.
- the vehicle can collect image information of multiple coordinate points in the coordinate system through the driver-facing camera, and then analyze whether there is human characteristic information at those coordinate points; if there is, the vehicle can determine that there is a user in the main driving area.
- the first device is a vehicle, and the sensing information may be pressure information.
- a pressure sensor is included under each seat in the vehicle, and the first device acquiring the location information of the multiple areas where the multiple users are located includes: the first device acquires the location information of the multiple areas through the pressure information (for example, pressure values) collected by the pressure sensors.
- for example, if the pressure value collected by the pressure sensor in the driver's seat is greater than or equal to a preset threshold, the vehicle may determine that there is a user in the main driving area.
- the sensing information may be sound information.
- the acquisition by the first device of location information of multiple areas where multiple users are located includes: the first device acquires location information of multiple areas where the multiple users are located through sound signals collected by a microphone array. For example, the first device may locate the user based on the sound signals collected by the microphone array. If the first device determines that the user is located in a certain area according to the sound signal, the first device may determine that there is a user in the area.
- the first device may also combine at least two of image information, pressure information, and sound information to determine whether there is a user in the area.
- for example, take the case where the first device is a vehicle.
- the vehicle can obtain the image information collected by the main driving camera and the pressure information collected by the pressure sensor in the main driving seat; if it determines through the image information that the image includes face information, and the pressure value collected by the pressure sensor in the main driving seat is greater than or equal to the first threshold, the vehicle can determine that there is a user in the main driving area.
- the camera may acquire image information of the area, and the microphone array may pick up sound information in the environment; if the image information collected by the camera indicates that it includes face information, and the sound information collected by the microphone array indicates that the sound comes from the area, the vehicle can determine that there is a user in the area.
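The combined-signal determination described above (at least two of image, pressure and sound information) can be sketched as follows. The fusion rule, threshold value and function names are illustrative assumptions, not the application's exact logic.

```python
PRESSURE_THRESHOLD = 200.0  # first threshold for the seat sensor; illustrative

def user_in_area(has_face, pressure=None, sound_from_area=None):
    """Confirm user presence in an area by combining image information
    with either pressure information or sound information (assumed rule:
    a detected face plus one corroborating signal)."""
    if pressure is not None and has_face and pressure >= PRESSURE_THRESHOLD:
        return True  # face detected and seat pressure reaches the threshold
    if sound_from_area is not None and has_face and sound_from_area:
        return True  # face detected and microphone array localizes sound here
    return False

print(user_in_area(has_face=True, pressure=350.0))        # True
print(user_in_area(has_face=True, pressure=50.0))         # False
print(user_in_area(has_face=True, sound_from_area=True))  # True
```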
- the first device controls the multiple sound emitting devices to work according to the location information of the multiple areas where the multiple users are located and the location information of the multiple sound generating devices.
- the location information of the multiple areas where the multiple users are located may include the center point of each of the multiple areas, or a preset point of each of the multiple areas, or a point in each area obtained according to preset rules (for example, preset algorithms).
- the first device controlling the multiple sound generating devices to work according to the location information of the multiple areas and the location information of the multiple sound generating devices includes: controlling the multiple sound generating devices to work according to the location information of the multiple areas and a mapping relationship, where the mapping relationship is a mapping relationship between the positions of the multiple areas and the playback intensities of the multiple sound generating devices.
- Table 1 shows the mapping relationship between the positions of multiple regions and the playback intensities of multiple sound generating devices.
- the mapping relationship shown in Table 1 between the positions of the areas and the playback intensities of the multiple sound generating devices is only schematic; the division of the areas and the playback intensities of the speakers are not limited in this embodiment of the present application.
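Table 1 itself is not reproduced in this excerpt, so the sketch below only shows an assumed shape for such a position-to-intensity mapping; the area names and intensity values are hypothetical.

```python
# Illustrative mapping from the set of occupied areas to per-speaker
# playback intensities (speaker order: 1, 2, 3, 4). The real Table 1 is
# not reproduced here, so these entries are assumptions.
PLAYBACK_MAP = {
    frozenset({"driver"}):                  (1.2, 0.8, 1.1, 0.9),
    frozenset({"driver", "passenger"}):     (1.0, 1.0, 1.1, 1.1),
    frozenset({"driver", "passenger",
               "rear_left", "rear_right"}): (1.0, 1.0, 1.0, 1.0),
}

def intensities_for(occupied):
    """Look up the playback intensities for the set of occupied areas;
    returns None when the occupancy pattern is not in the mapping."""
    return PLAYBACK_MAP.get(frozenset(occupied))

print(intensities_for({"driver", "passenger"}))  # (1.0, 1.0, 1.1, 1.1)
```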
- the first device controlling the multiple sound generating devices to work according to the location information of the multiple areas and the location information of the multiple sound generating devices includes: the first device determines a sound field optimization center point, where the distance from the sound field optimization center point to the center point of each of the multiple areas is equal; and the first device controls each of the multiple sound generating devices to work according to the distance between the sound field optimization center point and each sound generating device.
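The determination of a sound field optimization center point equidistant from the area centers can be sketched for the two-area case; any point on the perpendicular bisector is equidistant, and the midpoint used below is the natural in-cabin choice (the coordinates are illustrative).

```python
import math

def optimization_center(area_centers):
    """Sound field optimization center for two occupied areas: the
    midpoint of their center points, which is equidistant from both."""
    (x1, y1), (x2, y2) = area_centers
    return ((x1 + x2) / 2, (y1 + y2) / 2)

driver = (0.5, 0.5)       # illustrative cabin coordinates
rear_right = (1.5, 2.5)
c = optimization_center([driver, rear_right])
# The defining property: equal distance to both area center points.
assert math.isclose(math.dist(c, driver), math.dist(c, rear_right))
print(c)  # (1.0, 1.5)
```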
- the method further includes: the first device prompting position information of the sound field optimization center point.
- the first device prompts the location information of the current sound field optimization center through the HMI, sound or ambient light.
- the plurality of areas are areas within the vehicle cabin.
- the multiple areas include a front row area and a rear row area.
- the multiple areas may include a main driving area and a co-driving area.
- the method further includes: the first device prompting location information of multiple areas where the multiple users are located.
- the first device controls the operation of the plurality of sounding devices, including:
- the first device adjusts the playback intensity of each sound generating device in the plurality of sound generating devices.
- the multiple sound generating devices include a first sound generating device, and the first device adjusting the playback intensity of each of the multiple sound generating devices includes: the first device controls the playback intensity of the first sound generating device to be the first playback intensity; the method further includes: the first device acquires an instruction from the user for adjusting the playback intensity of the first sound generating device from the first playback intensity to the second playback intensity, and in response to acquiring the instruction, the first device adjusts the playback intensity of the first sound generating device to the second playback intensity.
- the vehicle can control the playback intensity of the four speakers to be p.
- when the vehicle detects that the user drags the smiling face in the left area of the second row to the icon 2701 on the HMI, it can adjust the playback intensity of the speakers near the left area of the second row from p to 0.
- the vehicle can control the playback intensity of the four speakers to be p.
- when the vehicle detects that the user slides up in the left area of the second row on the HMI, it can adjust the playback intensity of the speakers near the left area of the second row from p to 1.5p.
- the first device obtains the location information of the multiple areas where multiple users are located, and controls the multiple sound generating devices to work according to the location information of the multiple areas and the location information of the multiple sound generating devices, without requiring users to manually adjust the sound generating devices; this helps to reduce the user's learning cost and cumbersome operations, and at the same time helps multiple users enjoy a good listening effect, improving the user experience.
- Fig. 32 shows a schematic structural diagram of a sound generating system 3200 provided by an embodiment of the present application.
- the sound generating system may include a sensor 3201, a controller 3202, and multiple sound generating devices 3203, wherein:
- a sensor 3201 is used to collect data and send the data to the controller
- the controller 3202 is configured to obtain location information of multiple areas where multiple users are located according to the data; and control the multiple sound emitting devices 3203 to work according to the location information of the multiple areas and the location information of the multiple sound emitting devices.
- the data includes at least one of image information, pressure information and sound information.
- the controller 3202 is specifically configured to: determine the sound field optimization center point, where the distance from the sound field optimization center point to the center point of each of the multiple areas is equal; and control each of the multiple sound generating devices 3203 to work according to the distance between the sound field optimization center point and each sound generating device.
- the controller 3202 is further configured to send a first instruction to the first prompting device, where the first instruction is used to instruct the first prompting device to prompt the location information of the sound field optimization center point.
- the plurality of areas are areas within the vehicle cabin.
- the multiple areas include a front row area and a rear row area.
- the multiple areas may include a main driving area and a co-driving area.
- controller 3202 is further configured to send a second instruction to the second prompting device, where the second instruction is used to instruct the second prompting device to prompt the location information of multiple areas where the multiple users are located.
- the controller 3202 is specifically configured to: adjust the playback intensity of each sound generating device in the multiple sound generating devices 3203.
- the multiple sound generating devices 3203 include a first sound generating device, and the controller 3202 is specifically configured to control the playback intensity of the first sound generating device to be the first playback intensity; the controller 3202 is further configured to obtain a third instruction from the user for adjusting the playback intensity of the first sound generating device from the first playback intensity to a second playback intensity, and in response to acquiring the third instruction, adjust the playback intensity of the first sound generating device to the second playback intensity.
- the device 3300 includes a transceiver unit 3301 and a processing unit 3302, wherein the transceiver unit 3301 is used to receive sensing information; the processing unit 3302 is used to obtain, according to the sensing information, the location information of the multiple areas where the multiple users are located; and the processing unit 3302 is further configured to control the multiple sound generating devices to work according to the location information of the multiple areas where the multiple users are located and the position information of the multiple sound generating devices.
- the processing unit 3302 controlling the multiple sound generating devices to work according to the position information of the multiple areas and the position information of the multiple sound generating devices includes: the processing unit 3302 is configured to determine the sound field optimization center point, where the distance from the sound field optimization center point to the center point of each of the multiple areas is equal, and to control each of the multiple sound generating devices to work according to the distance between the sound field optimization center point and each sound generating device.
- the transceiving unit 3301 is further configured to send a first instruction to the first prompting unit, where the first instruction is used to instruct the first prompting unit to prompt the location information of the sound field optimization center point.
- the plurality of areas are areas within the vehicle cabin.
- the multiple areas include a front row area and a rear row area.
- the multiple areas include a main driving area and a co-driving area.
- the transceiving unit 3301 is further configured to send a second instruction to the second prompting unit, where the second instruction is used to instruct the second prompting unit to prompt the location information of multiple areas where the multiple users are located.
- the processing unit 3302 is specifically configured to: adjust the playback intensity of each sound-generating device among the plurality of sound-generating devices.
- the multiple sound generating devices include a first sound generating device
- the processing unit 3302 is specifically configured to: control the playback intensity of the first sound generating device to be the first playback intensity;
- the transceiver unit 3301 is also configured to receive a third instruction, where the third instruction is an instruction indicating to adjust the playing intensity of the first sound generating device from the first playing intensity to a second playing intensity;
- the processing unit 3302 is also configured to adjust the playing intensity of the first sound generating device to the second playing intensity.
- the sensory information includes one or more of image information, pressure information and sound information.
- the embodiment of the present application also provides a device, which includes a processing unit and a storage unit, wherein the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device executes the above-mentioned control method of the sound generating device.
- the above-mentioned processing unit may be the processor 151 shown in FIG. 1, and the above-mentioned storage unit may be the memory 152 shown in FIG. 1, or may be a storage unit (for example, a read-only memory or a random access memory) located outside the chip in the vehicle.
- the embodiment of the present application also provides a vehicle, including the above-mentioned sound generating system 3200 or the above-mentioned device 3300 .
- the embodiment of the present application also provides a computer program product, the computer program product including: computer program code, when the computer program code is run on the computer, the computer is made to execute the above method.
- the embodiment of the present application also provides a computer-readable medium, the computer-readable medium stores program codes, and when the computer program codes are run on a computer, the computer is made to execute the above method.
- each step of the above method may be completed by an integrated logic circuit of hardware in the processor 151 or instructions in the form of software.
- the methods disclosed in the embodiments of the present application can be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor 151.
- the software module may be located in a mature storage medium in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
- the storage medium is located in the memory, and the processor 151 reads the information in the memory 152, and completes the steps of the above method in combination with its hardware. To avoid repetition, no detailed description is given here.
- the processor 151 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
- the memory 152 may include a read-only memory and a random access memory, and provide instructions and data to the processor.
- the sequence numbers of the above-mentioned processes do not imply the order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
- the disclosed systems, devices and methods may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division; in actual implementation there may be other division methods.
- multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- if the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Mechanical Engineering (AREA)
- Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
- Stereophonic System (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Description
Claims (25)
- A control method for a sound generating apparatus, characterized by comprising: obtaining position information of a plurality of areas in which a plurality of users are located; and controlling operation of a plurality of sound generating apparatuses according to the position information of the plurality of areas and position information of the plurality of sound generating apparatuses.
- The method according to claim 1, wherein the controlling operation of the plurality of sound generating apparatuses according to the position information of the plurality of areas and the position information of the plurality of sound generating apparatuses comprises: determining a sound field optimization center point, wherein distances from the sound field optimization center point to a center point of each of the plurality of areas are equal; and controlling operation of each of the plurality of sound generating apparatuses according to a distance between the sound field optimization center point and each of the plurality of sound generating apparatuses.
- The method according to claim 2, further comprising: prompting position information of the sound field optimization center point.
- The method according to any one of claims 1 to 3, wherein the plurality of areas are areas within a vehicle cabin.
- The method according to claim 4, wherein the plurality of areas comprise a front-row area and a rear-row area.
- The method according to claim 4, wherein the plurality of areas comprise a driver area and a front-passenger area.
- The method according to any one of claims 1 to 6, further comprising: prompting the position information of the plurality of areas in which the plurality of users are located.
- The method according to any one of claims 1 to 7, wherein the controlling operation of the plurality of sound generating apparatuses comprises: adjusting a playback intensity of each of the plurality of sound generating apparatuses.
- The method according to claim 8, wherein the plurality of sound generating apparatuses comprise a first sound generating apparatus, and the adjusting a playback intensity of each of the plurality of sound generating apparatuses comprises: controlling the playback intensity of the first sound generating apparatus to be a first playback intensity; and the method further comprises: obtaining an instruction from a user to adjust the playback intensity of the first sound generating apparatus from the first playback intensity to a second playback intensity; and in response to obtaining the instruction, adjusting the playback intensity of the first sound generating apparatus to the second playback intensity.
- The method according to any one of claims 1 to 9, wherein the obtaining position information of a plurality of areas in which a plurality of users are located comprises: obtaining sensing information; and determining the position information of the plurality of areas according to the sensing information, wherein the sensing information comprises one or more of image information, pressure information, and sound information.
- An electronic apparatus, characterized by comprising: a transceiver unit configured to receive sensing information; and a processing unit configured to obtain, according to the sensing information, position information of a plurality of areas in which a plurality of users are located, wherein the processing unit is further configured to control operation of a plurality of sound generating apparatuses according to the position information of the plurality of areas and position information of the plurality of sound generating apparatuses.
- The electronic apparatus according to claim 11, wherein the processing unit is configured to: determine a sound field optimization center point, wherein distances from the sound field optimization center point to a center point of each of the plurality of areas are equal; and control operation of each of the plurality of sound generating apparatuses according to a distance between the sound field optimization center point and each of the plurality of sound generating apparatuses.
- The electronic apparatus according to claim 12, wherein the transceiver unit is further configured to send a first instruction to a first prompting unit, the first instruction instructing the first prompting unit to prompt position information of the sound field optimization center point.
- The electronic apparatus according to any one of claims 11 to 13, wherein the plurality of areas are areas within a vehicle cabin.
- The electronic apparatus according to claim 14, wherein the plurality of areas comprise a front-row area and a rear-row area.
- The electronic apparatus according to claim 14, wherein the plurality of areas comprise a driver area and a front-passenger area.
- The electronic apparatus according to any one of claims 11 to 16, wherein the transceiver unit is further configured to send a second instruction to a second prompting unit, the second instruction instructing the second prompting unit to prompt the position information of the plurality of areas in which the plurality of users are located.
- The electronic apparatus according to any one of claims 11 to 17, wherein the processing unit is specifically configured to: adjust a playback intensity of each of the plurality of sound generating apparatuses.
- The electronic apparatus according to claim 18, wherein the plurality of sound generating apparatuses comprise a first sound generating apparatus; the processing unit is specifically configured to control the playback intensity of the first sound generating apparatus to be a first playback intensity; the transceiver unit is further configured to receive a third instruction, the third instruction instructing that the playback intensity of the first sound generating apparatus be adjusted from the first playback intensity to a second playback intensity; and the processing unit is further configured to adjust the playback intensity of the first sound generating apparatus to the second playback intensity.
- The electronic apparatus according to any one of claims 11 to 19, wherein the sensing information comprises one or more of image information, pressure information, and sound information.
- An electronic apparatus, characterized by comprising: a memory configured to store instructions; and a processor configured to read the instructions to perform the method according to any one of claims 1 to 10.
- A system, characterized by comprising a sensor and an electronic apparatus, wherein the electronic apparatus is the electronic apparatus according to any one of claims 11 to 21.
- The system according to claim 22, further comprising a plurality of sound generating apparatuses.
- A computer-readable storage medium, characterized in that the computer-readable storage medium stores program code which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 10.
- A vehicle, characterized in that the vehicle comprises the electronic apparatus according to any one of claims 11 to 21, or the vehicle comprises the system according to claim 22 or 23.
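Claims 2 and 8 together describe the core computation: determine a point equidistant from the centers of the listener areas, then adjust each sound generating apparatus's playback intensity according to its distance from that point. As a rough illustration only — the claims prescribe no particular formula, and the function names, Cartesian seat coordinates, and distance-proportional gain rule below are all assumptions — the idea can be sketched as:

```python
import numpy as np

def equidistant_center(area_centers):
    """Point equidistant from the center points of the listener areas.

    For one or two areas this is simply the mean/midpoint; for three or
    more, the equal-distance conditions |x-p_i|^2 = |x-p_0|^2 form a
    linear system, solved here in the least-squares sense.
    """
    pts = np.asarray(area_centers, dtype=float)
    if len(pts) < 3:
        return pts.mean(axis=0)
    # |x - p_i|^2 = |x - p_0|^2  =>  2(p_i - p_0) . x = |p_i|^2 - |p_0|^2
    A = 2.0 * (pts[1:] - pts[0])
    b = (pts[1:] ** 2).sum(axis=1) - (pts[0] ** 2).sum()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def speaker_gains(center, speaker_positions):
    """Scale each loudspeaker's playback intensity with its distance to
    the sound field optimization center point: more distant speakers
    play relatively louder, balancing the level perceived at the center."""
    d = np.linalg.norm(np.asarray(speaker_positions, dtype=float) - center, axis=1)
    return d / d.max()
```

For two occupied areas (e.g. a front-row and a rear-row area), `equidistant_center` reduces to the midpoint of the two area centers, which satisfies the equal-distance condition of claim 2; the per-speaker scaling mirrors the playback-intensity adjustment of claim 8.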
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22832173.3A EP4344255A1 (en) | 2021-06-30 | 2022-06-30 | Method for controlling sound production apparatuses, and sound production system and vehicle |
US18/400,108 US20240236599A9 (en) | 2021-06-30 | 2023-12-29 | Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110744208.4 | 2021-06-29 | ||
CN202110744208.4A CN113596705B (zh) | 2021-06-30 | 2021-06-30 | 一种发声装置的控制方法、发声***以及车辆 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/400,108 Continuation US20240236599A9 (en) | 2021-06-30 | 2023-12-29 | Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023274361A1 true WO2023274361A1 (zh) | 2023-01-05 |
Family
ID=78245719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/102818 WO2023274361A1 (zh) | 2021-06-30 | 2022-06-30 | 一种发声装置的控制方法、发声***以及车辆 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4344255A1 (zh) |
CN (1) | CN113596705B (zh) |
WO (1) | WO2023274361A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113596705B (zh) * | 2021-06-30 | 2023-05-16 | 华为技术有限公司 | Control method for sound generating apparatus, sound generating system, and vehicle |
CN114038240B (zh) * | 2021-11-30 | 2023-05-05 | 东风商用车有限公司 | Sound field control method, apparatus, and device for commercial vehicles |
CN117985035A (zh) * | 2022-10-28 | 2024-05-07 | 华为技术有限公司 | Control method, apparatus, and vehicle |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103220597A (zh) * | 2013-03-29 | 2013-07-24 | 苏州上声电子有限公司 | In-vehicle sound field equalization apparatus |
CN103220594A (zh) * | 2012-01-20 | 2013-07-24 | 新昌有限公司 | Sound effect regulation system for vehicles |
CN104270695A (zh) * | 2014-09-01 | 2015-01-07 | 歌尔声学股份有限公司 | Method and system for automatically adjusting in-vehicle sound field distribution |
CN107592588A (zh) * | 2017-07-18 | 2018-01-16 | 科大讯飞股份有限公司 | Sound field adjustment method and apparatus, storage medium, and electronic device |
CN108551623A (zh) * | 2018-05-15 | 2018-09-18 | 上海博泰悦臻网络技术服务有限公司 | Vehicle and voice-recognition-based audio playback adjustment method thereof |
CN108834030A (zh) * | 2018-09-28 | 2018-11-16 | 广州小鹏汽车科技有限公司 | In-vehicle sound field adjustment method and audio system |
US20190141465A1 (en) * | 2016-04-29 | 2019-05-09 | Sqand Co. Ltd. | System for correcting sound space inside vehicle |
WO2019112087A1 (ko) * | 2017-12-06 | 2019-06-13 | 주식회사 피티지 | Directional sound system for vehicles |
CN109922411A (zh) * | 2019-01-29 | 2019-06-21 | 惠州市华智航科技有限公司 | Sound field control method and sound field control system |
CN110149586A (zh) * | 2019-05-23 | 2019-08-20 | 贵安新区新特电动汽车工业有限公司 | Sound adjustment method and apparatus |
CN112312280A (zh) * | 2019-07-31 | 2021-02-02 | 北京地平线机器人技术研发有限公司 | In-vehicle sound playback method and apparatus |
US20210152939A1 (en) * | 2019-11-19 | 2021-05-20 | Analog Devices, Inc. | Audio system speaker virtualization |
CN113596705A (zh) * | 2021-06-30 | 2021-11-02 | 华为技术有限公司 | Control method for sound generating apparatus, sound generating system, and vehicle |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9510126B2 (en) * | 2012-01-11 | 2016-11-29 | Sony Corporation | Sound field control device, sound field control method, program, sound control system and server |
CN204316717U (zh) * | 2014-09-01 | 2015-05-06 | 歌尔声学股份有限公司 | System for automatically adjusting in-vehicle sound field distribution |
US9509820B2 (en) * | 2014-12-03 | 2016-11-29 | Harman International Industries, Incorporated | Methods and systems for controlling in-vehicle speakers |
DK179663B1 (en) * | 2015-10-27 | 2019-03-13 | Bang & Olufsen A/S | Loudspeaker with controlled sound fields |
CN113055810A (zh) * | 2021-03-05 | 2021-06-29 | 广州小鹏汽车科技有限公司 | Sound effect control method, apparatus, system, vehicle, and storage medium |
- 2021-06-30 CN CN202110744208.4A patent/CN113596705B/zh active Active
- 2022-06-30 WO PCT/CN2022/102818 patent/WO2023274361A1/zh active Application Filing
- 2022-06-30 EP EP22832173.3A patent/EP4344255A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP4344255A1 (en) | 2024-03-27 |
CN113596705A (zh) | 2021-11-02 |
CN113596705B (zh) | 2023-05-16 |
US20240137721A1 (en) | 2024-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210280055A1 (en) | Feedback performance control and tracking | |
WO2021052213A1 (zh) | 调整油门踏板特性的方法和装置 | |
WO2023274361A1 (zh) | 一种发声装置的控制方法、发声***以及车辆 | |
US10286905B2 (en) | Driver assistance apparatus and control method for the same | |
WO2022000448A1 (zh) | 车内隔空手势的交互方法、电子装置及*** | |
WO2022205243A1 (zh) | 一种变道区域获取方法以及装置 | |
WO2020031812A1 (ja) | 情報処理装置、情報処理方法、情報処理プログラム、及び移動体 | |
EP3892960A1 (en) | Systems and methods for augmented reality in a vehicle | |
CN115042821B (zh) | 车辆控制方法、装置、车辆及存储介质 | |
WO2021217575A1 (zh) | 用户感兴趣对象的识别方法以及识别装置 | |
WO2024093768A1 (zh) | 一种车辆告警方法以及相关设备 | |
CN115056784B (zh) | 车辆控制方法、装置、车辆、存储介质及芯片 | |
CN114828131B (zh) | 通讯方法、介质、车载通讯***、芯片及车辆 | |
CN115170630A (zh) | 地图生成方法、装置、电子设备、车辆和存储介质 | |
US20240236599A9 (en) | Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle | |
CN114572219B (zh) | 自动超车方法、装置、车辆、存储介质及芯片 | |
CN114771514B (zh) | 车辆行驶控制方法、装置、设备、介质、芯片及车辆 | |
CN114802435B (zh) | 车辆控制方法、装置、车辆、存储介质及芯片 | |
CN115297434B (zh) | 服务调用方法、装置、车辆、可读存储介质及芯片 | |
WO2023050058A1 (zh) | 控制车载摄像头的视角的方法、装置以及车辆 | |
CN115535004B (zh) | 距离生成方法、装置、存储介质及车辆 | |
CN115063639B (zh) | 生成模型的方法、图像语义分割方法、装置、车辆及介质 | |
WO2024131698A1 (zh) | 一种车辆中座椅的调整方法、泊车方法以及相关设备 | |
CN114802217B (zh) | 确定泊车模式的方法、装置、存储介质及车辆 | |
WO2023106235A1 (ja) | 情報処理装置、情報処理方法、および車両制御システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22832173 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2023/015457 Country of ref document: MX |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022832173 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2023579623 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022832173 Country of ref document: EP Effective date: 20231221 |