WO2023274361A1 - Control method for a sound generating device, sound generating system, and vehicle - Google Patents

Control method for a sound generating device, sound generating system, and vehicle

Info

Publication number
WO2023274361A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
sound
sound generating
area
information
Prior art date
Application number
PCT/CN2022/102818
Other languages
English (en)
French (fr)
Inventor
黄天宇
董春鹤
谢尚威
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP22832173.3A priority Critical patent/EP4344255A1/en
Publication of WO2023274361A1 publication Critical patent/WO2023274361A1/zh
Priority to US18/400,108 priority patent/US20240236599A9/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/302Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303Tracking of listener position or orientation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/302Electronic adaptation of stereophonic sound system to listener position or orientation
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/13Acoustic transducers and sound field adaptation in vehicles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2400/00Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/13Aspects of volume control, not necessarily automatic, in stereophonic sound systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/80Technologies aiming to reduce greenhouse gasses emissions common to all road transportation technologies
    • Y02T10/84Data processing systems or methods, management, administration

Definitions

  • the embodiments of the present application relate to the field of smart cars, and more specifically, to a control method for a sound generating device, a sound generating system, and a vehicle.
  • Embodiments of the present application provide a control method for a sound generating device, a sound generating system, and a vehicle, which can adaptively adjust the sound field optimization center by acquiring the location information of the areas where users are located, helping to improve the users' listening experience.
  • In a first aspect, a method for controlling a sound generating device is provided, including: a first device acquiring location information of multiple areas where multiple users are located; and the first device controlling the operation of multiple sound generating devices according to the location information of the multiple areas and the location information of the multiple sound generating devices.
  • the first device obtains the location information of the multiple areas where the multiple users are located and controls the operation of the multiple sound generating devices according to the location information of the multiple areas and the location information of the multiple sound generating devices, without requiring users to manually adjust the sound generating devices. This helps reduce the user's learning cost and avoids cumbersome operations; it also helps multiple users enjoy a good listening effect, improving the user experience.
  • the first device may be a vehicle, a sound system in a home theater, or a sound system in a KTV.
  • before the first device acquires the location information of the areas where the multiple users are located, the method further includes: the first device detecting a first operation of the user.
  • the first operation is an operation by which the user controls the first device to play audio content, or an operation by which the user connects a second device to the first device and plays the audio content of the second device through the first device.
  • the acquiring, by the first device, of the location information of the multiple areas where the multiple users are located includes: determining, by the first device according to sensing information, the location information of the multiple areas where the multiple users are located.
  • the sensing information may be one or more of image information, sound information, and pressure information.
  • the image information can be collected by an image sensor, for example, a camera device, a radar, and the like.
  • Acoustic information can be collected by acoustic sensors, for example, microphone arrays.
  • the pressure information may be collected by a pressure sensor, for example, a pressure sensor installed in the seat.
  • the above sensing information may be data collected by sensors, or may be information obtained based on data collected by sensors.
  • the acquiring, by the first device, of the location information of the multiple areas where the multiple users are located includes: the first device determining the location information of the multiple areas according to data collected by an image sensor; or, the first device determining the location information of the multiple areas according to data collected by a pressure sensor; or, the first device determining the location information of the multiple areas according to data collected by a sound sensor.
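As an illustrative sketch of the pressure-sensor option above, the snippet below infers which cabin areas are occupied from per-seat pressure readings. The area names, seat coordinates, and the threshold value are assumptions made for illustration, not details from the application.

```python
# Hypothetical sketch: inferring occupied cabin areas from per-seat
# pressure sensor readings. Threshold and coordinates are illustrative.

PRESSURE_THRESHOLD_KG = 2.0  # readings above this are treated as an occupant

# Assumed seat-center coordinates (x, y) in metres, origin at front-left.
SEAT_CENTERS = {
    "driver":     (0.5, 0.5),
    "front_pass": (1.5, 0.5),
    "rear_left":  (0.5, 1.8),
    "rear_right": (1.5, 1.8),
}

def occupied_areas(pressure_readings):
    """Return the center points of areas whose seat sensor detects a user.

    pressure_readings: dict mapping area name -> pressure in kg.
    """
    return {
        area: SEAT_CENTERS[area]
        for area, kg in pressure_readings.items()
        if kg > PRESSURE_THRESHOLD_KG and area in SEAT_CENTERS
    }

# Example: driver and rear-left seats occupied, front passenger seat empty.
print(occupied_areas({"driver": 62.0, "front_pass": 0.0, "rear_left": 55.3}))
```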
  • controlling the multiple sound generating devices through the position information of the multiple areas where the multiple users are located and the position information of the multiple sound generating devices can simplify the calculation process of the first device, so that the first device can control the multiple sound generating devices more conveniently.
  • the image sensor may include a camera, a laser radar, and the like.
  • the image sensor can collect image information in an area, and the image information can be used to determine whether it contains face contour information, human ear information, iris information, etc., so as to determine whether a user is present in the area.
  • the sound sensor may include a microphone array.
  • the above-mentioned sensor may be a single sensor or multiple sensors. The sensors may be of the same type, for example, all image sensors; alternatively, sensing information from multiple types of sensors may be used, for example, image information collected by an image sensor and sound information collected by a sound sensor may jointly determine the location of the area where a user is located.
  • the location information of the multiple areas where the multiple users are located may include the center point of each of the multiple areas, a preset point of each of the multiple areas, or a point in each area obtained according to a preset rule.
  • the first device controlling the operation of the multiple sound generating devices according to the location information of the multiple areas and the location information of the multiple sound generating devices includes: the first device determining a sound field optimization center point, where the distances from the sound field optimization center point to the center point of each of the multiple areas are equal; and the first device controlling the operation of each of the multiple sound generating devices according to the distance between the sound field optimization center point and each sound generating device.
  • the first device may first determine the current sound field optimization center point before controlling the operation of the multiple sound generating devices, and then control the operation of the sound generating devices according to the distance between the sound field optimization center point and each of the multiple sound generating devices, so that multiple users can enjoy a good listening effect, helping to improve the user experience.
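The equal-distance condition above can be sketched numerically: starting from the centroid of the occupied area centers, a small descent step reduces the spread of the distances until a roughly equidistant point is found. This particular algorithm is an illustrative assumption; the application does not prescribe how the sound field optimization center point is computed.

```python
# Sketch (assumed approach): find a point whose distance to every occupied
# area center is as nearly equal as possible, by minimising the variance
# of the distances with simple gradient descent.
import math

def sound_field_center(points, steps=2000, lr=0.01):
    """Find a point roughly equidistant from all given (x, y) points."""
    if len(points) == 1:
        return points[0]
    # Start from the centroid of the occupied area centers.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    for _ in range(steps):
        dists = [math.hypot(cx - x, cy - y) for x, y in points]
        mean = sum(dists) / len(dists)
        gx = gy = 0.0
        for (x, y), d in zip(points, dists):
            if d == 0.0:
                continue  # sitting exactly on an area center
            # Gradient of (d - mean)^2 with respect to (cx, cy);
            # the mean's own gradient sums to zero across terms.
            gx += 2.0 * (d - mean) * (cx - x) / d
            gy += 2.0 * (d - mean) * (cy - y) / d
        cx -= lr * gx
        cy -= lr * gy
    return (cx, cy)

# Four symmetric seats: the equidistant point is the cabin center.
seats = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.8), (1.5, 1.8)]
print(sound_field_center(seats))  # close to (1.0, 1.15)
```

For two occupied areas this reduces to the midpoint, matching the two-user cases illustrated in the figures.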
  • the first device controlling the multiple sound generating devices to work according to the location information of the multiple areas and the location information of the multiple sound generating devices includes: controlling the playback intensities of the multiple sound generating devices according to the location information of the multiple areas and a mapping relationship, where the mapping relationship is the mapping between the positions of the multiple areas and the playback intensities of the multiple sound generating devices.
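A minimal sketch of this mapping-relationship variant is a precomputed lookup table from the set of occupied areas to per-speaker playback intensities. The area names, speaker names, and gain values below are made-up placeholders, not values from the application.

```python
# Illustrative lookup table: occupied-area combination -> per-speaker
# playback intensity (0.0-1.0). All names and values are assumptions.

INTENSITY_MAP = {
    frozenset({"driver"}):               {"FL": 0.6, "FR": 0.8, "RL": 0.9, "RR": 1.0},
    frozenset({"driver", "front_pass"}): {"FL": 0.7, "FR": 0.7, "RL": 1.0, "RR": 1.0},
    frozenset({"driver", "rear_left"}):  {"FL": 0.8, "FR": 0.9, "RL": 0.8, "RR": 0.9},
}

# Fallback when a combination has no entry: play all speakers equally.
DEFAULT_GAINS = {"FL": 1.0, "FR": 1.0, "RL": 1.0, "RR": 1.0}

def gains_for(occupied):
    """Look up speaker gains for the set of occupied areas."""
    return INTENSITY_MAP.get(frozenset(occupied), DEFAULT_GAINS)

print(gains_for({"driver"}))  # {'FL': 0.6, 'FR': 0.8, 'RL': 0.9, 'RR': 1.0}
```

A table like this trades memory for computation: the first device only performs a lookup at runtime, consistent with the simplification noted above.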
  • the method further includes: the first device prompts position information of the sound field optimization center point.
  • the first device prompting the position information of the sound field optimization center point includes: the first device prompting the position information of the sound field optimization center point through a human-machine interface (HMI) or by voice.
  • the first device may be a vehicle
  • the first device prompting the position information of the sound field optimization center point includes: the vehicle prompting the position information of the sound field optimization center point through an ambient light.
  • the multiple areas are areas in the vehicle cabin.
  • the multiple areas include a front row area and a rear row area.
  • the multiple areas may include a main driving area and a passenger driving area.
  • the multiple areas include the main driver's seat area, the passenger's seat area, the left area of the second row, and the right area of the second row.
  • the first device may be a vehicle, and the acquiring, by the first device, of the location information of the multiple areas where the multiple users are located includes: the vehicle acquiring the location information of the multiple areas where the multiple users are located through pressure sensors under the seats in each area.
  • the first device includes a microphone array, and the acquiring, by the first device, of the location information of the multiple users includes: the first device acquiring voice signals in the environment through the microphone array, and determining, according to the voice signals, the location information of the multiple areas where the multiple users in the environment are located.
  • the method further includes: the first device prompts location information of multiple areas where the multiple users are located.
  • the first device prompting the location information of the multiple areas where the multiple users are located includes: the first device prompting, through a human-machine interface (HMI) or by sound, the location information of the multiple areas where the multiple users are located.
  • the first device may be a vehicle, and the first device prompting the location information of the multiple areas where the multiple users are located includes: the vehicle prompting, through ambient lights, the location information of the multiple areas where the multiple users are located.
  • controlling the operation of the multiple sound generating devices includes: adjusting the playback intensity of each of the multiple sound generating devices.
  • the playback intensity of each of the multiple sound generating devices is proportional to the distance between that sound generating device and the user.
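The distance-proportional rule above means a speaker farther from the listening position plays louder, so every seat hears a balanced level. The linear scaling and max-normalisation in this sketch are illustrative assumptions; the speaker layout is also made up.

```python
# Sketch: per-speaker gain proportional to the speaker's distance from the
# chosen listening center, normalised so the farthest speaker has gain 1.0.
import math

def proportional_gains(center, speakers):
    """Gain per speaker, proportional to distance from `center`.

    speakers: dict mapping speaker name -> (x, y) position.
    """
    dists = {name: math.hypot(x - center[0], y - center[1])
             for name, (x, y) in speakers.items()}
    far = max(dists.values())
    return {name: d / far for name, d in dists.items()}

# Assumed four-speaker cabin layout (metres).
speakers = {"FL": (0.0, 0.0), "FR": (2.0, 0.0), "RL": (0.0, 2.4), "RR": (2.0, 2.4)}
# Listening center at the driver's seat: the nearest speaker plays softest.
print(proportional_gains((0.5, 0.5), speakers))
```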
  • the multiple sound generating devices include a first sound generating device, and the first device adjusting the playback intensity of each of the multiple sound generating devices includes: the first device controlling the playback intensity of the first sound generating device to be a first playback intensity. The method further includes: the first device acquiring an instruction from the user to adjust the playback intensity of the first sound generating device from the first playback intensity to a second playback intensity; and, in response to acquiring the instruction, the first device adjusting the playback intensity of the first sound generating device to the second playback intensity.
  • in this way, the user can quickly adjust the playback intensity of the first sound generating device to the second playback intensity, so that the first sound generating device better meets the user's desired listening effect.
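The override flow above (automatic intensity first, then a user instruction replacing it for one device) can be sketched as follows. The class and method names are assumptions for illustration.

```python
# Minimal sketch of the manual-override behaviour: the device applies
# automatically computed intensities, then a user instruction overrides
# one speaker's intensity. Names are hypothetical.

class SoundDeviceController:
    def __init__(self, auto_gains):
        # auto_gains: automatically computed playback intensity per speaker.
        self.gains = dict(auto_gains)

    def handle_user_instruction(self, speaker, new_intensity):
        """Apply a user request to change one speaker's playback intensity."""
        if speaker in self.gains:
            self.gains[speaker] = new_intensity

ctrl = SoundDeviceController({"FL": 0.29, "FR": 0.65, "RL": 0.81, "RR": 1.0})
ctrl.handle_user_instruction("FL", 0.5)   # user turns the first speaker up
print(ctrl.gains["FL"])  # 0.5
```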
  • In a second aspect, a sound generating system is provided, including a sensor, a controller, and multiple sound generating devices, where the sensor is used to collect data and send the data to the controller; the controller is used to obtain, according to the data, the location information of multiple areas where multiple users are located, and to control the operation of the multiple sound generating devices according to the location information of the multiple areas and the location information of the multiple sound generating devices.
  • the controller is specifically configured to: acquire the location information of the multiple areas according to data collected by an image sensor; or acquire the location information of the multiple areas according to data collected by a pressure sensor; or acquire the location information of the multiple areas according to data collected by a sound sensor.
  • the controller is specifically configured to: determine a sound field optimization center point, where the distances from the sound field optimization center point to the center point of each of the multiple areas are equal; and control the operation of each of the multiple sound generating devices according to the distance between the sound field optimization center point and each sound generating device.
  • the controller is further configured to send a first instruction to a first prompting device, where the first instruction is used to instruct the first prompting device to prompt the location information of the sound field optimization center point.
  • the multiple areas are areas in the vehicle cabin.
  • the multiple areas include a front row area and a rear row area.
  • the front row area includes a main driving area and a passenger driving area.
  • the controller is further configured to send a second instruction to a second prompting device, where the second instruction is used to instruct the second prompting device to prompt the location information of the multiple areas where the multiple users are located.
  • the controller is specifically configured to: adjust the playback intensity of each of the multiple sound generating devices.
  • the multiple sound generating devices include a first sound generating device, and the controller is specifically configured to: control the playback intensity of the first sound generating device to be a first playback intensity. The controller is further configured to: obtain a third instruction from the user to adjust the playback intensity of the first sound generating device from the first playback intensity to a second playback intensity; and, in response to obtaining the third instruction, adjust the playback intensity of the first sound generating device to the second playback intensity.
  • In a third aspect, an electronic device is provided, including: a transceiver unit configured to receive sensing information; and a processing unit configured to acquire, according to the sensing information, the location information of multiple areas where multiple users are located. The processing unit is further configured to control the operation of multiple sound generating devices according to the location information of the multiple areas and the location information of the multiple sound generating devices.
  • the processing unit being further configured to control the operation of the multiple sound generating devices according to the location information of the multiple areas and the location information of the multiple sound generating devices includes: the processing unit being configured to determine a sound field optimization center point, where the distances from the sound field optimization center point to the center point of each of the multiple areas are equal; and to control the operation of each of the multiple sound generating devices according to the distance between the sound field optimization center point and each sound generating device.
  • the transceiver unit is further configured to send a first instruction to a first prompt unit, where the first instruction is used to instruct the first prompt unit to prompt the location information of the sound field optimization center point.
  • the multiple areas are areas in the vehicle cabin.
  • the multiple areas include a front row area and a rear row area.
  • the multiple areas include a main driving area and a co-driving area.
  • the transceiver unit is further configured to send a second instruction to a second prompt unit, where the second instruction is used to instruct the second prompt unit to prompt the location information of the multiple areas where the multiple users are located.
  • the processing unit is specifically configured to: adjust the playback intensity of each of the multiple sound generating devices.
  • the multiple sound generating devices include a first sound generating device, and the processing unit is specifically configured to: control the playback intensity of the first sound generating device to be the first playback intensity;
  • the transceiver unit is further configured to receive a third instruction, where the third instruction indicates adjusting the playback intensity of the first sound generating device from the first playback intensity to a second playback intensity;
  • the processing unit is further configured to adjust the playback intensity of the first sound generating device to the second playback intensity.
  • the sensory information includes one or more of image information, pressure information, and sound information.
  • the electronic device may be a chip or a vehicle-mounted device (e.g., a controller).
  • the transceiver unit may be an interface circuit.
  • the processing unit may be a processor, a processing device, and the like.
  • In a fourth aspect, an apparatus is provided, including units for performing the method in any implementation of the above first aspect.
  • In a fifth aspect, a device is provided, including a processing unit and a storage unit, where the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device performs the method in any possible implementation of the first aspect.
  • the above-mentioned processing unit may be a processor
  • the above-mentioned storage unit may be a memory
  • the memory may be a storage unit in the chip (such as a register or a cache), or a storage unit in the vehicle located outside the chip (such as a read-only memory or a random-access memory).
  • In a sixth aspect, a system is provided, including a sensor and an electronic device, where the electronic device may be the electronic device described in any possible implementation of the third aspect.
  • the system further includes multiple sound generating devices.
  • In a seventh aspect, a system is provided, including multiple sound generating devices and an electronic device, where the electronic device may be the electronic device described in any possible implementation of the third aspect.
  • the system further includes a sensor.
  • In an eighth aspect, a vehicle is provided. The vehicle includes the sound generating system described in any possible implementation of the second aspect; or the vehicle includes the electronic device described in any possible implementation of the third aspect; or the vehicle includes the apparatus described in any possible implementation of the fourth aspect; or the vehicle includes the device described in any possible implementation of the fifth aspect; or the vehicle includes the system described in any possible implementation of the sixth aspect; or the vehicle includes the system described in any possible implementation of the seventh aspect.
  • a computer program product is provided, including computer program code which, when run on a computer, causes the computer to execute the method in the above first aspect.
  • a computer-readable medium is provided, which stores program code; when the program code is run on a computer, the computer is caused to execute the method of the above first aspect.
  • Fig. 1 is a schematic functional block diagram of a vehicle provided by an embodiment of the present application.
  • Fig. 2 is a schematic structural diagram of a sound generation system provided by an embodiment of the present application.
  • Fig. 3 is another structural schematic diagram of a sound generating system provided by an embodiment of the present application.
  • Figure 4 is a top view of a vehicle.
  • Fig. 5 is a schematic flow chart of a method for controlling a sound emitting device provided by an embodiment of the present application.
  • Fig. 6 is a schematic diagram of the positions of four speakers in the vehicle cabin.
  • Fig. 7 is a schematic diagram of the sound field optimization center of the speakers in the car when there are users in the driver's seat, the co-driver's seat, the left side of the second row, and the right side of the second row according to the embodiment of the present application.
  • Fig. 8 is a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • Fig. 9 is a schematic diagram of the sound field optimization center of the speaker in the car when there are users in the driver's seat, the passenger's seat and the left area of the second row provided by the embodiment of the application.
  • Fig. 10 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 11 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 12 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 13 is a schematic diagram of the sound field optimization center of the speakers in the car when there are users in the driver's seat and the left area of the second row provided by the embodiment of the present application.
  • Fig. 14 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 15 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 16 is a schematic diagram of an in-vehicle sound field optimization center provided by an embodiment of the present application when there are users in the main driving seat and the co-driving seat.
  • Fig. 17 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 18 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 19 is a schematic diagram of the interior sound field optimization center provided by the embodiment of the present application when there are users in the driver's seat and the second row on the left.
  • Fig. 20 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 21 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 22 is a schematic diagram of an in-vehicle sound field optimization center provided by an embodiment of the present application when there is a user in the main driving seat.
  • Fig. 23 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 24 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 25 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 26 is another schematic diagram of the sound field optimization center displayed on the large central control screen of the vehicle provided by the embodiment of the present application.
  • Fig. 27 is a set of graphical user interfaces (GUIs) provided by an embodiment of the present application.
  • Fig. 28 is another set of GUIs provided by an embodiment of the present application.
  • Fig. 29 is a schematic diagram of the method for controlling a sound generating device according to an embodiment of the present application when it is applied to a home theater.
  • Fig. 30 is a schematic diagram of a sound field optimization center under a home theater provided by an embodiment of the present application.
  • Fig. 31 is another schematic flowchart of the control method of the sound generating device provided by the embodiment of the present application.
  • Fig. 32 is a schematic structural diagram of a sounding system provided by an embodiment of the present application.
  • Fig. 33 is a schematic block diagram of a device provided by an embodiment of the present application.
  • FIG. 1 is a schematic functional block diagram of a vehicle 100 provided by an embodiment of the present application.
  • Vehicle 100 may be configured in a fully or partially autonomous driving mode.
  • the vehicle 100 can obtain information about its surrounding environment through the perception system 120 and, based on an analysis of this information, obtain an automatic driving strategy to realize fully automatic driving, or present the analysis results to the user to realize partially automatic driving.
  • Vehicle 100 may include various subsystems such as infotainment system 110 , perception system 120 , decision control system 130 , drive system 140 , and computing platform 150 .
  • vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components.
  • each subsystem and component of the vehicle 100 may be interconnected in a wired or wireless manner.
  • the infotainment system 110 may include a communication system 111 , an entertainment system 112 and a navigation system 113 .
  • Communication system 111 may include a wireless communication system that may wirelessly communicate with one or more devices, either directly or via a communication network.
  • the wireless communication system may use third generation (3G) cellular communication, such as code division multiple access (CDMA), evolution-data optimized (EVDO), global system for mobile communications (GSM), or general packet radio service (GPRS); or fourth generation (4G) cellular communication, such as long term evolution (LTE).
  • the wireless communication system can communicate over a wireless local area network (WLAN) using Wi-Fi.
  • the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee.
  • other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
  • the entertainment system 112 may include a central control screen, a microphone, and a sound system. Users can listen to the radio or play music in the car through the entertainment system. The central control screen may be a touch screen, which users can operate by touching. In some cases, the user's voice signal can be acquired through the microphone, and the vehicle 100 can be controlled based on an analysis of the user's voice signal, for example, to adjust the temperature inside the vehicle. In other cases, music may be played to the user through the speakers.
  • the navigation system 113 may include a map service provided by a map provider, so as to provide navigation for the driving route of the vehicle 100 , and the navigation system 113 may cooperate with the global positioning system 121 and the inertial measurement unit 122 of the vehicle.
  • the map service provided by the map provider can be a two-dimensional map or a high-definition map.
  • the perception system 120 may include several kinds of sensors that sense information about the environment around the vehicle 100 .
  • the perception system 120 may include a global positioning system 121 (the global positioning system may be a GPS system, or a Beidou system or other positioning systems), an inertial measurement unit (inertial measurement unit, IMU) 122, a laser radar 123, a millimeter wave radar 124 , ultrasonic radar 125 and camera device 126 .
  • the perception system 120 may also include sensors that monitor the internal systems of the vehicle 100 (e.g., an in-car air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding properties (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 100.
  • the global positioning system 121 may be used to estimate the geographic location of the vehicle 100 .
  • the inertial measurement unit 122 is used to sense the position and orientation changes of the vehicle 100 based on inertial acceleration.
  • The inertial measurement unit 122 may be a combination of an accelerometer and a gyroscope.
  • the lidar 123 may utilize laser light to sense objects in the environment in which the vehicle 100 is located.
  • lidar 123 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components.
  • the millimeter wave radar 124 may utilize radio signals to sense objects within the surrounding environment of the vehicle 100 .
  • The millimeter wave radar 124 may also be used to sense the velocity and/or heading of objects.
  • the ultrasonic radar 125 may sense objects around the vehicle 100 using ultrasonic signals.
  • the camera device 126 can be used to capture image information of the surrounding environment of the vehicle 100 .
  • the camera device 126 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, etc., and the image information acquired by the camera device 126 may include still images or video stream information.
  • the decision-making control system 130 includes a computing system 131 for analyzing and making decisions based on the information acquired by the perception system 120.
  • the decision-making control system 130 also includes a vehicle controller 132 for controlling the power system of the vehicle 100, and for controlling the steering of the vehicle 100.
  • Computing system 131 is operable to process and analyze various information acquired by the perception system 120 in order to identify objects and/or features in the environment surrounding the vehicle 100.
  • the objects may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles.
  • the computing system 131 may use technologies such as object recognition algorithms, structure from motion (SFM) algorithms, and video tracking. In some embodiments, computing system 131 may be used to map the environment, track objects, estimate the velocity of objects, and the like.
  • the computing system 131 can analyze various information obtained and obtain a control strategy for the vehicle.
  • the vehicle controller 132 can be used for coordinated control of the power battery and the engine 141 of the vehicle, so as to improve the power performance of the vehicle 100 .
  • the steering system 133 is operable to adjust the heading of the vehicle 100 .
  • For example, the steering system 133 may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 141 and thus the speed of the vehicle 100 .
  • the braking system 135 is used to control deceleration of the vehicle 100 .
  • Braking system 135 may use friction to slow wheels 144 .
  • braking system 135 may convert kinetic energy of wheels 144 into electrical current.
  • the braking system 135 may also take other forms to slow the wheels 144 to control the speed of the vehicle 100 .
  • Drive system 140 may include components that provide powered motion to vehicle 100 .
  • drive system 140 may include engine 141 , energy source 142 , transmission 143 and wheels 144 .
  • the engine 141 may be an internal combustion engine, an electric motor, an air compression engine or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • Engine 141 converts energy source 142 into mechanical energy.
  • Examples of energy source 142 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power.
  • the energy source 142 may also provide energy to other systems of the vehicle 100.
  • Transmission 143 may transmit mechanical power from engine 141 to wheels 144 .
  • Transmission 143 may include a gearbox, a differential, and a drive shaft.
  • the transmission device 143 may also include other devices, such as clutches.
  • The drive shafts may include one or more axles that may be coupled to one or more wheels 144.
  • Computing platform 150 may include at least one processor 151 that may execute instructions 153 stored in a non-transitory computer-readable medium such as memory 152 .
  • computing platform 150 may also be a plurality of computing devices that control individual components or subsystems of vehicle 100 in a distributed manner.
  • Processor 151 may be any conventional processor, such as a commercially available CPU.
  • The processor 151 may also include, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof.
  • Although FIG. 1 functionally illustrates the processor, memory, and other elements of the computer 110 in the same block, those of ordinary skill in the art will understand that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing.
  • the memory may be a hard drive or other storage medium located in a different housing than the computer 110 .
  • references to a processor or computer are to be understood to include references to collections of processors or computers or memories that may or may not operate in parallel.
  • Some components, such as the steering and deceleration components, may each have their own processor that only performs calculations related to component-specific functions.
  • the processor may be located remotely from the vehicle and be in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the necessary steps to perform a single maneuver.
  • memory 152 may contain instructions 153 (eg, program logic) executable by processor 151 to perform various functions of vehicle 100 .
  • Memory 152 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 110, the perception system 120, the decision control system 130, and the drive system 140.
  • memory 152 may also store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by vehicle 100 and computing platform 150 during operation of vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • Computing platform 150 may control functions of vehicle 100 based on input received from various subsystems (eg, drive system 140 , perception system 120 , and decision-making control system 130 ). For example, computing platform 150 may utilize input from decision control system 130 in order to control steering system 133 to avoid obstacles detected by perception system 120 . In some embodiments, computing platform 150 is operable to provide control over many aspects of vehicle 100 and its subsystems.
  • one or more of these components described above may be installed separately from or associated with the vehicle 100 .
  • memory 152 may exist partially or completely separate from vehicle 100 .
  • the components described above may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as limiting the embodiment of the present application.
  • An autonomous vehicle traveling on a road can identify objects within its surroundings to determine adjustments to its current speed.
  • the objects may be other vehicles, traffic control devices, or other types of objects.
  • each identified object may be considered independently and based on the object's respective characteristics, such as its current speed, acceleration, distance to the vehicle, etc., may be used to determine the speed at which the autonomous vehicle is to adjust.
  • The vehicle 100, or a sensing and computing device associated with the vehicle 100 (e.g., computing system 131, computing platform 150), may predict the behavior of the identified objects based on the characteristics of the identified objects and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
  • each identified object is dependent on the behavior of the other, so all identified objects can also be considered together to predict the behavior of a single identified object.
  • the vehicle 100 is able to adjust its speed based on the predicted behavior of the identified object.
  • The self-driving car is able to determine what stable state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the objects.
  • other factors may also be considered to determine the speed of the vehicle 100 , such as the lateral position of the vehicle 100 in the traveling road, the curvature of the road, the proximity of static and dynamic objects, and the like.
  • The computing device may also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects in the vicinity of the self-driving car (e.g., cars in adjacent lanes on the road).
  • The above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, amusement park vehicle, construction equipment, tram, golf cart, train, etc., which is not particularly limited in the embodiments of the present application.
  • the embodiment of the present application provides a control method of a sound generating device, a sound generating system, and a vehicle. By identifying the location information of the area where the user is located, the sound field optimization center is automatically adjusted, so that each user can achieve a good listening effect.
  • Fig. 2 shows a schematic structural diagram of a sound emitting system provided by an embodiment of the present application.
  • The sound generating system can be a controller area network (CAN) control system, and the CAN control system can include multiple sensors (for example, sensor 1, sensor 2, etc.), multiple electronic control units (ECUs), a car entertainment host, a speaker controller, and speakers.
  • sensors include but are not limited to cameras, microphones, ultrasonic radars, millimeter-wave radars, lidars, vehicle speed sensors, motor power sensors, and engine speed sensors, etc.;
  • the ECU is used to receive data collected by sensors and execute corresponding commands. After obtaining periodic signals or event signals, the ECU can send these signals to the public CAN network, where the ECU includes but is not limited to vehicle controllers, hybrid controllers, automatic transmission controllers, and automatic driving controllers;
  • the entertainment host is used to capture the periodic signal or event signal sent by each ECU on the public CAN network, and executes the corresponding operation or forwards the signal to the speaker controller when the corresponding signal is recognized;
  • The speaker controller is used to receive command signals from the car entertainment host and adjust the speakers accordingly.
  • the vehicle entertainment host can capture the image information collected by the camera from the CAN bus.
  • the car entertainment host can judge whether there are users in multiple areas in the car through the image information, and send the user's location information to the speaker controller.
  • the speaker controller can control the playback intensity of each speaker according to the location information of the user.
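  The camera → CAN bus → entertainment host → speaker controller flow described above can be sketched as follows; the class names, frame format, and field names are illustrative assumptions, not taken from the patent:

```python
class SpeakerController:
    """Receives user position information and adjusts the speakers."""

    def __init__(self):
        self.occupied_areas = ()

    def on_position_info(self, areas):
        # A real controller would recompute per-speaker playback
        # intensities here; this sketch just records the update.
        self.occupied_areas = tuple(areas)


class EntertainmentHost:
    """Captures signals on the public CAN network and forwards
    recognized occupancy signals to the speaker controller."""

    def __init__(self, controller):
        self.controller = controller

    def on_can_frame(self, frame):
        # Only forward frames the host recognizes as occupancy signals.
        if frame.get("type") == "occupancy":
            self.controller.on_position_info(frame["areas"])
```

  A frame of any other type is executed by the host itself (or ignored here), mirroring the "executes the corresponding operation or forwards the signal" behavior described above.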
  • Fig. 3 shows another schematic structural diagram of an in-vehicle sound generation system provided by an embodiment of the present application.
  • The sound generating system can be a ring network communication architecture, and all sensors and actuators (such as speakers, ambient lights, air conditioners, motors, and other components that receive and execute commands) can be connected to the nearest vehicle integration unit (VIU).
  • As a communication interface unit, the VIU can be deployed at locations where vehicle sensors and actuators are dense, so that the sensors and actuators of the vehicle can be connected nearby; at the same time, the VIU can have certain computing and drive capabilities (for example, the VIU can absorb part of the drive computation functions of the actuators). Sensors include but are not limited to cameras, microphones, ultrasonic radars, millimeter wave radars, lidars, vehicle speed sensors, motor power sensors, and engine speed sensors.
  • The VIUs communicate with each other through networking, and the intelligent driving computing platform/mobile data center (MDC), the vehicle domain controller (VDC), and the smart cockpit domain controller (CDC) each have redundant access to the ring communication network formed by the VIUs.
  • The sensor (for example, a camera) can send the collected data to the VIU.
  • The VIU can publish data to the ring network; the MDC, VDC, and CDC collect the relevant data on the ring network, compute and convert it into signals including user location information, and publish those signals back to the ring network, and the VIU then controls the playback intensity of the speakers through its corresponding computing and drive capabilities.
  • VIU1 is used to drive speaker 1
  • VIU2 is used to drive speaker 2
  • VIU3 is used to drive speaker 3
  • VIU4 is used to drive speaker 4.
  • the layout of the VIU can be independent of the speaker, for example, VIU1 can be arranged at the left rear of the vehicle, and speaker 1 can be arranged near the door on the driver's side.
  • Sensors or actuators can be connected to the nearest VIU to save wiring harness. Because the number of interfaces on the MDC, VDC, and CDC is limited, the VIU can take on the access of multiple sensors and multiple actuators, performing the functions of interfacing and communication.
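  The "connect to the nearest VIU to save wiring harness" rule can be sketched as a simple nearest-neighbor assignment; the 2-D coordinates below are illustrative assumptions, not patent data:

```python
import math

# Illustrative top-down positions of four VIUs in the vehicle frame.
VIUS = {
    "VIU1": (0.0, 2.0),  # left rear
    "VIU2": (1.6, 2.0),  # right rear
    "VIU3": (1.6, 0.0),  # right front
    "VIU4": (0.0, 0.0),  # left front
}

def nearest_viu(device_pos):
    """Return the VIU closest to a sensor/actuator position, so each
    device is wired to the nearest communication interface unit."""
    return min(VIUS, key=lambda name: math.dist(device_pos, VIUS[name]))
```

  For example, a speaker near the left rear door would be assigned to the VIU at the left rear, which matches the nearby-connection principle described above.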
  • Which VIU a sensor or actuator is connected to, and which controller controls the sound generating system, may be set at the factory or defined by the user, and the hardware is replaceable and upgradeable.
  • The VIU can absorb the drive computation functions of some sensors and actuators, so that when some controllers (for example, the CDC or VDC) fail, the VIU can directly process the data collected by the sensors and then perform control.
  • the communication architecture shown in FIG. 3 may be an intelligent digital vehicle platform (intelligent digital vehicle platform, IDVP) ring network communication architecture.
  • Figure 4 shows a top view of a vehicle.
  • position 1 is the main driver's seat
  • position 2 is the co-driver's position
  • positions 3-5 are the rear area
  • positions 6a-6d are where the four speakers in the car are located
  • position 7 is where the camera in the car is located
  • position 8 is where the CDC and the vehicle's central control panel are located.
  • the speaker can be used to play the media sound in the car
  • the camera in the car can be used to detect the position of the passengers in the car
  • the car central control screen can be used to display image information and the interface of the application program
  • The CDC is used to connect various peripherals, and it also provides data analysis and processing capabilities.
  • FIG. 4 is only illustrated by taking the speakers at positions 6a-6d located near the main driver's door, near the passenger's door, near the left door of the second row, and near the right door of the second row as an example.
  • the position of the loudspeaker is not specifically limited.
  • The loudspeakers can also be located near the doors of the vehicle, near the large central control screen, on the ceiling, on the floor, or on the seats (for example, on the headrest of a seat).
  • Fig. 5 shows a schematic flow chart of a method 500 for controlling a sound emitting device provided by an embodiment of the present application.
  • the method 500 may be applied in a vehicle including a plurality of sound emitting devices (eg, speakers), the method 500 includes:
  • the vehicle acquires the location information of the user.
  • The vehicle can obtain image information of various areas in the vehicle (such as the main driver's seat, the front passenger seat, and the rear area) by activating the in-vehicle camera, and use the image information of each area to determine whether there is a user in that area.
  • the vehicle can analyze the outline of a human face through the image information collected by the camera, so that the vehicle can determine whether there is a user in the area.
  • the vehicle can analyze the iris information of the human eye contained in the image information collected by the camera, so that the vehicle can determine that there is a user in the area.
  • When the vehicle detects that the user has turned on the sound field adaptive switch, the vehicle can start the camera to acquire image information of various areas in the vehicle.
  • the user can select the setting option through the large central control screen and enter the sound effect function interface, and can choose to turn on the sound field adaptive switch in the sound effect function interface.
  • the vehicle can also detect whether there is a user in the current area through the pressure sensor under the seat. Exemplarily, when the pressure value detected by the pressure sensor on the seat in a certain area is greater than or equal to a preset value, it may be determined that there is a user in the area.
  • the vehicle can also determine sound source location information through the audio information acquired by the microphone array, so as to determine in which areas users exist.
  • The vehicle may also acquire the position information of the user in the vehicle through a combination of one or more of the above-mentioned in-vehicle camera, pressure sensor, and microphone array.
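  The three occupancy cues above (camera-based face/iris detection, a seat pressure reading at or above a preset value, and microphone-array sound source localization) can be combined per area. The OR-fusion rule and the threshold value below are illustrative assumptions, not taken from the patent:

```python
def area_has_user(face_detected=False, pressure_n=0.0,
                  sound_source_here=False, pressure_threshold_n=200.0):
    """Mark an in-car area as occupied when any single cue is
    positive: a detected face/iris, a seat pressure reading at or
    above the preset threshold, or a localized sound source."""
    return (face_detected
            or pressure_n >= pressure_threshold_n
            or sound_source_here)
```

  Any-one-cue fusion errs toward detecting occupants, which suits this use case: wrongly including an empty seat only shifts the sound field center slightly, while missing a passenger would exclude them from the optimization.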
  • data collected by sensors may be transmitted to the CDC, and the CDC may process the data to determine which areas there are users.
  • The CDC can convert the detection result into a position marker. For example, when only the main driver's seat is occupied, the CDC can output 1000; when only the front passenger seat is occupied, the CDC can output 0100; when only second row left is occupied, the CDC can output 0010; when the main driver's seat and the front passenger seat are occupied, the CDC can output 1100; and when the main driver's seat, the front passenger seat, and second row left are occupied, the CDC can output 1110.
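  The marker the CDC outputs is a simple per-area bitmask. A minimal sketch (the area names are illustrative; the digit order follows the examples above):

```python
# One digit per area, in the order used by the CDC's marker examples:
# driver, front passenger, second row left, second row right.
AREAS = ("driver", "front_passenger", "second_row_left", "second_row_right")

def occupancy_marker(occupied):
    """Encode the set of occupied areas as a 4-digit marker string:
    1 = occupied, 0 = empty, one digit per area in AREAS order."""
    return "".join("1" if area in occupied else "0" for area in AREAS)
```

  This reproduces the examples in the text: driver only gives 1000, front passenger only gives 0100, and so on.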
  • the division of the areas in which the user sits in the car into the main driving seat, the co-driving seat, the second row on the left and the second row on the right is described as an example.
  • the embodiment of the present application is not limited thereto.
  • For example, the area inside the car can also be divided into the main driver's seat, the front passenger seat, second row left, second row middle, and second row right; for another example, for a 7-seat SUV, the area inside the car can be divided into the main driver's seat, the front passenger seat, second row left, second row right, third row left, and third row right.
  • the area inside the car may be divided into a front row area and a rear row area.
  • the area in the car can be divided into a driving area, a passenger area, and so on.
  • the vehicle adjusts the sound generating device according to the location information of the user.
  • Taking the sound generating device being a speaker as an example, the description will be made in conjunction with the speakers 6a-6d in FIG. 4.
  • Figure 6 shows the locations of the four speakers. Take as an example that the points where the four speakers are located form a rectangle ABCD: speaker 1 is set at point A, speaker 2 at point B, speaker 3 at point C, and speaker 4 at point D. Point O is the center point of rectangle ABCD (the distances from point O to points A, B, C, and D are equal). It should be understood that different cars have different speaker positions and numbers of speakers; during specific implementation, a specific adjustment method can be designed according to the car model or the arrangement of the speakers in the car, which is not limited in this application.
  • Fig. 7 shows a schematic diagram of the sound field optimization center of the speakers in the car when there are users in the driver's seat, the co-driver's seat, the left side of the second row, and the right side of the second row provided by the embodiment of the present application.
  • the center point of each area can form a rectangle EFGH, and the center point of the rectangle EFGH can be point Q.
  • Point Q can be the sound field optimization center point in the current car.
  • point Q may coincide with point O. Since the distance from the center point Q of the rectangle EFGH to the four speakers is equal, the vehicle can control the four speakers to play at the same intensity (for example, the four speakers all play at p).
  • the vehicle can control the playback intensity of the four speakers according to the distance between the Q point and the four speakers.
  • For example, the vehicle can set the playback intensities of speakers 1, 2, 3, and 4 respectively according to their distances from point Q.
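  The geometry of Figs. 6-7 can be sketched as follows. The coordinates, the centroid-based choice of center, and the distance-proportional intensity rule are illustrative assumptions; the patent's exact intensity formulas are not reproduced in this text:

```python
import math

# Speaker corners of rectangle ABCD (Fig. 6); coordinates illustrative.
SPEAKERS = {
    1: (0.0, 0.0),  # point A
    2: (1.6, 0.0),  # point B
    3: (1.6, 2.4),  # point C
    4: (0.0, 2.4),  # point D
}

def optimization_center(seat_centers):
    """Centroid of the occupied seat-area centers. For the symmetric
    rectangle EFGH of Fig. 7 this is its center Q; for two occupied
    seats it is the midpoint of the line connecting their centers."""
    xs = [p[0] for p in seat_centers]
    ys = [p[1] for p in seat_centers]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def playback_intensities(center, p=1.0):
    """One plausible rule (an assumption, not the patent's formula):
    each speaker's intensity scales with its distance to the center,
    normalized by the mean distance, so that when the center is
    equidistant from all four speakers every speaker plays at p."""
    d = {k: math.dist(pos, center) for k, pos in SPEAKERS.items()}
    mean = sum(d.values()) / len(d)
    return {k: p * d[k] / mean for k in d}
```

  With all four seats occupied, the centroid of the seat centers coincides with the rectangle center O, all distances are equal, and every speaker plays at the same intensity p, matching the Fig. 7 case.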
  • Fig. 8 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • The large central control screen can prompt the user: "People detected in the main driver's seat, the front passenger seat, second row left, and second row right", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center points of the areas where the main driver's seat, the front passenger seat, second row left, and second row right are located.
  • Fig. 9 shows a schematic diagram of the sound field optimization center of the speaker in the car when there are users in the driver's seat, the passenger's seat and the left area of the second row provided by the embodiment of the present application.
  • The center points of the areas where the main driver's seat, the front passenger seat, and second row left are located can form a triangle EFG, wherein the circumcenter of triangle EFG can be point Q.
  • Point Q can be the sound field optimization center point in the current car.
  • Point Q may coincide with point O. When point Q coincides with point O, the distances from point Q to the four speakers are equal, and the vehicle can control the four speakers to play at the same intensity (for example, all four speakers play at intensity p).
  • the Q point and the O point may also not coincide.
  • the manner in which the vehicle controls the playback intensity of the four speakers can refer to the description in the above embodiment, and will not be repeated here.
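  The circumcenter Q of triangle EFG, the point equidistant from all three occupied seat centers, has a standard closed form. A minimal sketch with illustrative coordinates:

```python
def circumcenter(e, f, g):
    """Circumcenter of triangle EFG: the unique point equidistant
    from the three vertices, used as the sound field optimization
    center Q when three seating areas are occupied (Fig. 9)."""
    ax, ay = e
    bx, by = f
    cx, cy = g
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy)
          + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx)
          + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)
```

  For a right triangle the circumcenter lies at the midpoint of the hypotenuse, a quick sanity check for the formula.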
  • Fig. 10 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • the large central control screen can prompt the user to "detect people in the main driving seat, the passenger seat, and the second row left", and at the same time remind the user
  • the current sound field optimization center point may be a point that is equidistant from the center point of the area where the main driver's seat is located, the center point of the area where the passenger seat is located, and the center point of the area where the second row left is located.
  • Fig. 11 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • the large central control screen can prompt the user to "detect someone in the driver's seat, the passenger seat, and the right side of the second row", and at the same time remind the user
  • the current sound field optimization center point may be a point that is equidistant from the center point of the area where the main driver's seat is located, the center point of the area where the passenger seat is located, and the center point of the area where the second row right is located.
  • Fig. 12 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • When the vehicle detects that there are users in the main driver's seat, second row left, and second row right, it can prompt the user through the large central control screen: "People detected in the main driver's seat, second row left, and second row right", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center points of the areas where the main driver's seat, second row left, and second row right are located.
  • Fig. 13 shows a schematic diagram, provided by an embodiment of the present application, of the sound field optimization center of the speakers in the car when there are users in the main driver's seat and the second row right area.
  • The line connecting the center points of the areas where the main driver's seat and second row right are located is EH, and the midpoint of EH can be point Q.
  • Point Q can be the sound field optimization center point in the current car.
  • Point Q may coincide with point O. Since the midpoint of line segment EH is equidistant from the four speakers, the vehicle can control the four speakers to play at the same intensity (for example, all four speakers play at intensity p).
  • the vehicle can control the playback intensity of the four speakers according to the distance between the Q point and the four speakers.
  • Fig. 14 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • the large central control screen can prompt the user to "detect someone in the main driving seat and the right side of the second row", and at the same time remind the user of the current sound field optimization center.
  • the point may be a point at the same distance from the center point of the area where the main driving seat is located and the center point of the area where the second row right is located.
  • Fig. 15 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • When the vehicle detects that there are users in the front passenger seat and the second row left area, it can prompt the user through the large central control screen: "People detected in the front passenger seat and second row left", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center point of the area where the front passenger seat is located and the center point of the area where second row left is located.
  • Fig. 16 shows a schematic diagram of an in-vehicle sound field optimization center provided by an embodiment of the present application when there are users in the main driving seat and the co-driving seat.
  • the line connecting the center points of the areas where the main driver's seat and the co-driver's seat are located is EF, where the midpoint of EF may be point P.
  • Point P can be the current interior sound field optimization center point.
  • Fig. 17 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • When the vehicle detects that there are users in the main driver's seat and the front passenger seat, it can prompt the user through the large central control screen: "People detected in the main driver's seat and the front passenger seat", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center point of the area where the main driver's seat is located and the center point of the area where the front passenger seat is located.
  • the vehicle can control the playback intensity of the four speakers according to the distance between the point P and the four speakers.
  • For example, the vehicle can set the playback intensities of speakers 1, 2, 3, and 4 respectively according to their distances from point P.
  • Fig. 18 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • When the vehicle detects that there are users in the second row left and second row right areas, it can prompt the user through the large central control screen: "People detected in second row left and second row right", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center points of the areas where second row left and second row right are located.
  • Fig. 19 shows a schematic diagram of the interior sound field optimization center provided by the embodiment of the present application when there are users in the driver's seat and the second row on the left.
  • The line connecting the center points of the areas where the main driver's seat and second row left are located is EG, where the midpoint of EG can be point R.
  • the R point can optimize the center point for the current sound field in the car.
  • Fig. 20 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • when the vehicle detects that there are users in the driver's seat and the second-row left area, it can prompt the user through the central control screen that "people have been detected in the driver's seat and the second-row left area", and at the same time remind the user that the current sound field optimization center point may be a point equidistant from the center point of the area where the driver's seat is located and the center point of the second-row left area.
  • the vehicle can control the playback intensity of the four speakers according to the distance between point R and the four speakers.
  • the vehicle can control the playback intensity of speaker 1 to be
  • the vehicle can control the playback intensity of speaker 2 to be
  • the vehicle can control the playback intensity of speaker 3 to be
  • the vehicle can control the playback intensity of speaker 4 to be
  • Fig. 21 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • the central control screen can prompt the user that "people have been detected in the front passenger seat and the second-row right area", and at the same time remind the user that the current sound field optimization center point can be a point equidistant from the center point of the area where the front passenger seat is located and the center point of the second-row right area.
  • Fig. 22 shows a schematic diagram of an in-vehicle sound field optimization center provided by an embodiment of the present application when there is a user in the main driving seat.
  • the center point of the area where the driver's seat is located is point E, and point E can be the current in-vehicle sound field optimization center point.
  • Fig. 23 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • when the vehicle detects that there is a user in the driver's seat, it can prompt the user through the central control screen that "someone has been detected in the driver's seat", and at the same time remind the user that the current sound field optimization center point can be the center point of the area where the driver's seat is located.
  • the vehicle can control the playback intensity of the four speakers according to the distance between point E and the four speakers.
  • the vehicle can control the playback intensity of speaker 1 to be
  • the vehicle can control the playback intensity of speaker 2 to be
  • the vehicle can control the playback intensity of speaker 3 to be
  • the vehicle can control the playback intensity of speaker 4 to be
  • Fig. 24 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • when the vehicle detects that there is a user in the front passenger seat, it can prompt the user through the central control screen that "someone has been detected in the front passenger seat", and at the same time remind the user that the current sound field optimization center point can be the center point of the area where the front passenger seat is located.
  • Fig. 25 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • when the vehicle detects that there is a user in the second-row left area, it can prompt the user through the central control screen that "someone has been detected in the second-row left area", and at the same time remind the user that the current sound field optimization center point can be the center point of the second-row left area.
  • Fig. 26 shows a schematic diagram of a sound field optimization center displayed on a large central control screen of a vehicle provided by an embodiment of the present application.
  • when the vehicle detects that there is a user in the second-row right area, it can prompt the user through the central control screen that "someone has been detected in the second-row right area", and at the same time remind the user that the current sound field optimization center point can be the center point of the second-row right area.
  • the above description takes, as an example, the case in S501 where the user's location information is the center point of the area where the user is located; this embodiment of the present application is not limited thereto.
  • the user's location information may also be other preset points in the area where the user is located, or the user's location information may also be a certain point in the area calculated according to a preset rule (eg, a preset algorithm).
  • the location information of the user may also be determined according to the location information of the user's ears.
  • the vehicle can determine the position information of the user's ears from the image information collected by the camera device.
  • the user's ear position information is the midpoint of the line connecting the first point and the second point, wherein the first point is a certain point on the user's left ear, and the second point is a certain point on the user's right ear.
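The midpoint rule just described can be sketched directly; the coordinates below are hypothetical image-plane points on the user's left and right ears, not values from the patent:

```python
def ear_midpoint(left_ear, right_ear):
    """User position = midpoint of the line connecting a point on the
    left ear (first point) and a point on the right ear (second point)."""
    return tuple((a + b) / 2 for a, b in zip(left_ear, right_ear))

# Hypothetical pixel coordinates from the camera image.
pos = ear_midpoint((100.0, 240.0), (160.0, 238.0))
```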
  • the location information of the area may also be determined according to the position information of the user's ears or the auricles of the ears. With reference to Fig. 27 and Fig. 28, the following describes the process of the user manually adjusting the playback intensity of a certain speaker after the vehicle has adjusted the playback intensity of multiple sound generating devices based on the user's location information.
  • FIG. 27 shows a set of graphical user interfaces (GUIs) provided by an embodiment of the present application.
  • the vehicle can prompt the user through the HMI to "detect the main driving seat, the auxiliary driving seat, There are people in the second row left and the second row right" and remind the user of the current sound field optimization center.
  • the smiling faces on the driver's seat, the front passenger seat, the second-row left area, and the second-row right area indicate that there are users in those areas.
  • an icon 2701 (for example, a trash can icon) can be displayed through the HMI.
  • the vehicle detects that the user drags the smiling face in the left area of the second row to the icon 2701, the vehicle can display a GUI as shown in (b) in FIG. 27 through the HMI.
  • the vehicle can prompt the user through the HMI that "the left area of the second row has been moved to the left area for you.” speaker volume drops to 0”.
  • the playback intensity of the current four speakers can be p.
  • when the vehicle detects that the user drags the smiling face in the second-row left area to the icon 2701, the vehicle can reduce the playback intensity of the speaker in that area to 0, or reduce it from p to 0.1p.
  • the embodiment of the application does not limit this.
  • Fig. 28 shows a set of GUI provided by the embodiment of the present application.
  • the vehicle can prompt the user through the HMI to "detect the main driving seat, the auxiliary driving seat, There are people in the second row left and the second row right" and remind the user of the current sound field optimization center.
  • a scroll bar 2801 of playback intensity may be displayed.
  • the playback intensity scroll bar 2801 may include a scroll block 2802 .
  • in response to detecting that the user's finger slides upward in the second-row left area, the vehicle can increase the playback intensity of the speakers near that area and display, on the HMI, the scroll block 2802 moving up.
  • the playback intensity of the speakers near the left area of the second row may be increased from p to 1.5p.
  • the vehicle can prompt the user through the HMI that "the speaker volume in the left area of the second row has been increased for you".
  • after the vehicle adjusts the playback intensity of the first sound generating device to the first playback intensity, if it detects that the user adjusts the playback intensity of a certain speaker from the first playback intensity to a second playback intensity, it can adjust the playback intensity of that speaker to the second playback intensity.
  • in this way, the user can quickly adjust the playback intensity of the speakers in an area, so that those speakers better match the user's desired listening effect.
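The automatic-then-manual flow above can be sketched as a small controller: the vehicle first sets an automatic (first) playback intensity per speaker, and a user instruction overrides a single speaker with a second intensity. All names and values here are illustrative assumptions, not the patent's implementation:

```python
class SpeakerController:
    """Minimal sketch of per-speaker intensity with user overrides."""

    def __init__(self, auto_intensities):
        # First playback intensities set automatically by the vehicle.
        self.intensities = dict(auto_intensities)

    def apply_user_adjustment(self, speaker_id, second_intensity):
        # User instruction: adjust this speaker from its first playback
        # intensity to the second playback intensity.
        self.intensities[speaker_id] = second_intensity
        return self.intensities[speaker_id]

ctrl = SpeakerController({"front_left": 1.0, "front_right": 1.0,
                          "rear_left": 1.0, "rear_right": 1.0})
ctrl.apply_user_adjustment("rear_left", 0.0)    # e.g. drag to trash icon
ctrl.apply_user_adjustment("rear_right", 1.5)   # e.g. slide-up gesture
```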
  • the vehicle can also determine the state of the user in a certain area through the image information collected by the camera, so as to adjust the playback intensity of the speakers near the area in combination with the location information of the area and the state of the user.
  • the vehicle may control the playback intensity of the speakers near the left area of the second row to be 0 or other values.
  • the second playback intensity may also be a default playback intensity (for example, the second playback intensity is 0).
  • when the vehicle detects the user's preset operation in a certain area (for example, the second-row left area) on the central control screen, the vehicle can adjust the playback intensity of the speakers in that area from the first playback intensity to the default playback intensity.
  • the preset operation includes but is not limited to detection of a user's long-press operation in this area (for example, long-press on a seat in the left area of the second row), sliding or clicking operations in this area, and the like.
  • Fig. 29 shows a schematic diagram when the method for controlling a sound generating device according to an embodiment of the present application is applied to a home theater.
  • the home theater may include a speaker 1, a speaker 2, and a speaker 3.
  • the sound system in the home theater can adjust the three speakers by detecting the positional relationship between the user and the three speakers.
  • FIG. 30 shows a schematic diagram of a sound field optimization center under a home theater provided by an embodiment of the present application.
  • speaker 1 is set at point A of triangle ABC, speaker 2 is set at point B, and speaker 3 is set at point C, where O is the circumcenter of triangle ABC.
  • the sound system can control the playback intensities of speaker 1, speaker 2, and speaker 3 to be the same (for example, the playback intensities of the three speakers are all p).
  • the sound system can adjust the playback intensity of the three speakers according to the positional relationship between the user's area and the three speakers.
  • the sound system can control the playback intensity of speaker 1 to be
  • the sound system can control the playback intensity of speaker 2 to be
  • the sound system can control the playback intensity of speaker 3 to be
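The circumcenter O named above is the point equidistant from all three speakers, so it serves as the default optimization center. A standard computation (the triangle coordinates below are illustrative, not from the patent) is:

```python
def circumcenter(a, b, c):
    """Circumcenter O of triangle ABC: the unique point equidistant
    from the three vertices (speaker positions A, B, C)."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Right triangle: circumcenter is the midpoint of the hypotenuse.
o = circumcenter((0.0, 0.0), (4.0, 0.0), (0.0, 3.0))
```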
  • Fig. 31 shows a schematic flowchart of a method 3100 for controlling a sound emitting device provided by an embodiment of the present application.
  • the method 3100 may be applied in the first device.
  • the method 3100 includes:
  • the first device acquires location information of multiple areas where multiple users are located.
  • the acquiring, by the first device, of the location information of the multiple areas where the multiple users are located includes: acquiring sensing information, and determining the location information of the multiple areas according to the sensing information, where the sensing information includes one or more of image information, pressure information, and sound information.
  • the sensor information may include image information.
  • the first device can acquire image information through an image sensor.
  • the vehicle may determine whether the image information includes human face contour information, human ear information, or iris information through the image information collected by the camera device.
  • the vehicle can obtain image information in the main driving area collected by the driver's camera. If the vehicle can determine that the image information includes one or more of human face contour information, human ear information or iris information, then the first device can determine that there is a user in the main driving area.
  • the embodiment of the present application does not limit the implementation process of determining one or more of human face contour information, human ear information, or iris information included in the image information.
  • for example, a vehicle could feed the image information into a neural network to determine whether the area includes a user's face.
  • the vehicle can also establish a coordinate system for the main driving area.
  • the vehicle can collect image information of multiple coordinate points in the coordinate system through the driver's camera, and then analyze whether there is human characteristic information at the multiple coordinate points. If there is human characteristic information, the vehicle can determine that there is a user in the main driving area.
  • for example, the first device is a vehicle, and the sensing information may be pressure information.
  • a pressure sensor may be included under each seat in the vehicle, and the first device acquiring the location information of the multiple areas where the multiple users are located includes: the first device obtaining the location information of the multiple areas through the pressure information (for example, pressure values) collected by the pressure sensors.
  • for example, if the pressure value collected by the pressure sensor in the driver's seat is greater than or equal to a first threshold, the vehicle may determine that there is a user in the main driving area.
  • the sensing information may be sound information.
  • the acquisition by the first device of location information of multiple areas where multiple users are located includes: the first device acquires location information of multiple areas where the multiple users are located through sound signals collected by a microphone array. For example, the first device may locate the user based on the sound signals collected by the microphone array. If the first device determines that the user is located in a certain area according to the sound signal, the first device may determine that there is a user in the area.
  • the first device may also combine at least two of image information, pressure information, and sound information to determine whether there is a user in the area.
  • for example, take the case where the first device is a vehicle.
  • the vehicle can obtain the image information collected by the driver's camera and the pressure information collected by the pressure sensor in the driver's seat. If the image information includes face information and the pressure value collected by the pressure sensor is greater than or equal to the first threshold, the vehicle can determine that there is a user in the main driving area.
  • the camera may acquire image information in the area and pick up sound information in the environment through a microphone array. If the image information of the area collected by the camera determines that the image information includes face information and the sound information collected by the microphone array determines that the sound comes from the area, then the vehicle can determine that there is a user in the area.
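The two fusion rules just described (face + pressure, face + sound localization) can be sketched as simple predicates. The threshold value is an illustrative assumption, not a number from the patent:

```python
def occupied_by_image_and_pressure(has_face, pressure_value,
                                   first_threshold=200.0):
    """Seat occupied when the camera detects a face in the seat's image
    AND the seat pressure sensor reads at or above the first threshold
    (threshold value illustrative)."""
    return has_face and pressure_value >= first_threshold

def occupied_by_image_and_sound(has_face, sound_localized_to_area):
    """Seat occupied when a face is detected AND the microphone array
    localizes a voice to the same area."""
    return has_face and sound_localized_to_area
```

Requiring two independent signals to agree reduces false positives from, say, a bag on a seat (pressure only) or a face on a screen (image only).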
  • the first device controls the multiple sound emitting devices to work according to the location information of the multiple areas where the multiple users are located and the location information of the multiple sound generating devices.
  • the location information of the multiple areas where the multiple users are located may include the center point of each of the multiple areas, a preset point of each of the multiple areas, or a point in each area obtained according to preset rules (e.g., a preset algorithm).
  • the first device controlling the multiple sound generating devices to work according to the location information of the multiple areas and the location information of the multiple sound generating devices includes: controlling the multiple sound generating devices to work according to the location information of the multiple areas and a mapping relationship, where the mapping relationship is between the positions of the multiple areas and the playback intensities of the multiple sound generating devices.
  • Table 1 shows the mapping relationship between the positions of multiple regions and the playback intensities of multiple sound generating devices.
  • the mapping relationship shown in Table 1 between the positions of multiple areas and the playback intensities of multiple sound generating devices is only schematic; the way the areas are divided and the playback intensities of the speakers are not limited in this embodiment of the present application.
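A mapping relationship of this kind can be held as a lookup table keyed by the set of occupied areas. Table 1 is not reproduced in this extraction, so every entry and area name below is hypothetical:

```python
# Hypothetical mapping from occupied areas to per-speaker playback
# intensities (fl/fr/rl/rr = front-left/front-right/rear-left/rear-right).
INTENSITY_MAP = {
    frozenset({"driver"}): {"fl": 1.2, "fr": 1.2, "rl": 0.8, "rr": 0.8},
    frozenset({"driver", "front_passenger"}): {"fl": 1.0, "fr": 1.0,
                                               "rl": 1.0, "rr": 1.0},
    frozenset({"rear_left", "rear_right"}): {"fl": 0.8, "fr": 0.8,
                                             "rl": 1.2, "rr": 1.2},
}

def intensities_for(occupied_areas, default=1.0):
    """Look up per-speaker playback intensities for the occupied areas;
    fall back to a uniform default when no mapping entry exists."""
    key = frozenset(occupied_areas)
    return INTENSITY_MAP.get(key,
                             {k: default for k in ("fl", "fr", "rl", "rr")})
```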
  • the first device controlling the multiple sound generating devices to work according to the location information of the multiple areas and the location information of the multiple sound generating devices includes: the first device determining a sound field optimization center point, where the distances from the sound field optimization center point to the center point of each of the multiple areas are equal; and the first device controlling each of the multiple sound generating devices to work according to the distance between the sound field optimization center point and each of the multiple sound generating devices.
  • the method further includes: the first device prompting position information of the sound field optimization center point.
  • the first device prompts the location information of the current sound field optimization center through the HMI, sound or ambient light.
  • the plurality of areas are areas within the vehicle cabin.
  • the multiple areas include a front row area and a rear row area.
  • the multiple areas may include a main driving area and a co-driving area.
  • the method further includes: the first device prompting location information of multiple areas where the multiple users are located.
  • the first device controls the operation of the plurality of sounding devices, including:
  • the first device adjusts the playback intensity of each sound generating device in the plurality of sound generating devices.
  • the plurality of sound generating devices include a first sound generating device, and the first device adjusts the playing intensity of each sound generating device in the plurality of sound generating devices, including: the first device controls the playing of the first sound generating device The intensity is the first playback intensity; wherein, the method further includes: the first device acquires an instruction from the user to adjust the playback intensity of the first sound generating device from the first playback intensity to the second playback intensity; in response to acquiring the instruction instruction, the first device adjusts the playback intensity of the first sound generating device to the second playback intensity.
  • the vehicle can control the playback intensity of the four speakers to be p .
  • the vehicle detects that the user drags the smiling face in the left area of the second row to the icon 2701 on the HMI, the vehicle can adjust the playback intensity of the speakers near the left area of the second row from p to 0.
  • the vehicle can control the playback intensity of the four speakers to be p .
  • the vehicle detects that the user slides up the left area of the second row on the HMI, the vehicle can adjust the playback intensity of the speakers near the left area of the second row from p to 1.5p.
  • the first device obtains the location information of multiple areas where multiple users are located, and controls the work of multiple sound emitting devices according to the location information of multiple areas and the location information of multiple sound emitting devices, without the need for users to manually Adjusting the sound generating device helps to reduce the user's learning cost and reduce the user's cumbersome operations; at the same time, it also helps multiple users to enjoy a good listening effect, which helps to improve the user experience.
  • Fig. 32 shows a schematic structural diagram of a sound generating system 3200 provided by an embodiment of the present application.
  • the sound generating system may include a sensor 3201, a controller 3202, and multiple sound generating devices 3203, where:
  • a sensor 3201 is used to collect data and send the data to the controller
  • the controller 3202 is configured to obtain location information of multiple areas where multiple users are located according to the data; and control the multiple sound emitting devices 3203 to work according to the location information of the multiple areas and the location information of the multiple sound emitting devices.
  • the data includes at least one of image information, pressure information and sound information.
  • the controller 3202 is specifically configured to: determine a sound field optimization center point, where the distances from the sound field optimization center point to the center point of each of the multiple areas are equal; and control each of the multiple sound generating devices 3203 to work according to the distance between the sound field optimization center point and each of the multiple sound generating devices.
  • the controller 3202 is further configured to send a first instruction to the first prompting device, where the first instruction is used to instruct the first prompting device to prompt the location information of the sound field optimization center point.
  • the plurality of areas are areas within the vehicle cabin.
  • the multiple areas include a front row area and a rear row area.
  • the multiple areas may include a main driving area and a co-driving area.
  • controller 3202 is further configured to send a second instruction to the second prompting device, where the second instruction is used to instruct the second prompting device to prompt the location information of multiple areas where the multiple users are located.
  • the controller 3202 is specifically configured to: adjust the playback intensity of each sound generating device in the plurality of sound generating devices 3203 .
  • the multiple sound generating devices 3203 include a first sound generating device, and the controller 3202 is specifically configured to: control the playback intensity of the first sound generating device to be the first playback intensity; the controller 3202 is also configured to obtain the user's A third instruction for adjusting the play intensity of the first sound generating device from the first play intensity to a second play intensity; in response to acquiring the third instruction, adjusting the play intensity of the first sound generating device to the second play intensity.
  • the device 3300 includes a transceiver unit 3301 and a processing unit 3302, where the transceiver unit 3301 is configured to receive sensing information; the processing unit 3302 is configured to obtain, according to the sensing information, the location information of the multiple areas where the multiple users are located; and the processing unit 3302 is further configured to control the multiple sound generating devices to work according to the location information of the multiple areas where the multiple users are located and the location information of the multiple sound generating devices.
  • the processing unit 3302 is further configured to control the operation of the plurality of sound emitting devices according to the position information of the multiple regions and the position information of the plurality of sound generating devices, including: the processing unit 3302 is configured to: determine the sound field optimization center point , the distance from the sound field optimization center point to the center point of each area in the plurality of areas is equal; according to the distance between the sound field optimization center point and each sound generation device in the plurality of sound generation devices, control the plurality of sound generation devices Each sound generating device in the work.
  • the transceiving unit 3301 is further configured to send a first instruction to the first prompting unit, where the first instruction is used to instruct the first prompting unit to prompt the location information of the sound field optimization center point.
  • the plurality of areas are areas within the vehicle cabin.
  • the multiple areas include a front row area and a rear row area.
  • the multiple areas include a main driving area and a co-driving area.
  • the transceiving unit 3301 is further configured to send a second instruction to the second prompting unit, where the second instruction is used to instruct the second prompting unit to prompt the location information of multiple areas where the multiple users are located.
  • the processing unit 3302 is specifically configured to: adjust the playback intensity of each sound-generating device among the plurality of sound-generating devices.
  • the multiple sound generating devices include a first sound generating device.
  • the processing unit 3302 is specifically configured to: control the playback intensity of the first sound generating device to be the first playback intensity.
  • the transceiver unit 3301 is further configured to receive a third instruction, where the third instruction indicates adjusting the playback intensity of the first sound generating device from the first playback intensity to a second playback intensity.
  • the processing unit 3302 is further configured to adjust the playback intensity of the first sound generating device to the second playback intensity.
  • the sensory information includes one or more of image information, pressure information and sound information.
  • the embodiment of the present application also provides a device, which includes a processing unit and a storage unit, wherein the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device executes the above-mentioned control method of the sound generating device.
  • the above-mentioned processing unit may be the processor 151 shown in FIG. 1, and the above-mentioned storage unit may be the memory 152 shown in FIG. 1. The storage unit may also be a storage unit (for example, a read-only memory or a random access memory) located outside the chip in the vehicle.
  • the embodiment of the present application also provides a vehicle, including the above-mentioned sound generating system 3200 or the above-mentioned device 3300 .
  • the embodiment of the present application also provides a computer program product, the computer program product including: computer program code, when the computer program code is run on the computer, the computer is made to execute the above method.
  • the embodiment of the present application also provides a computer-readable medium, the computer-readable medium stores program codes, and when the computer program codes are run on a computer, the computer is made to execute the above method.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 151 or instructions in the form of software.
  • the methods disclosed in the embodiments of the present application can be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor 151.
  • the software module may be located in a mature storage medium in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor 151 reads the information in the memory 152, and completes the steps of the above method in combination with its hardware. To avoid repetition, no detailed description is given here.
  • the processor 151 may be a central processing unit (central processing unit, CPU), and the processor 151 may also be other general-purpose processors, digital signal processors (digital signal processor, DSP), Application specific integrated circuit (ASIC), off-the-shelf programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the memory 152 may include a read-only memory and a random access memory, and provide instructions and data to the processor.
  • the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation processes of the embodiments of the present application.
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, or the part that contributes to the prior art, or a part of the technical solution, may essentially be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Abstract

This application provides a control method for sound-producing apparatuses, a sound-producing system, and a vehicle. The method includes: a first device obtains position information of multiple areas where multiple users are located; and the first device controls multiple sound-producing apparatuses to operate according to the position information of the multiple areas and position information of the multiple sound-producing apparatuses. In the embodiments of this application, the user does not need to adjust the sound-producing apparatuses manually, which helps reduce the user's learning cost and tedious operations; at the same time, it helps all users enjoy a good listening effect and improves the user experience.

Description

Control method for sound-producing apparatuses, sound-producing system, and vehicle
This application claims priority to Chinese Patent Application No. 202110744208.4, filed with the China National Intellectual Property Administration on June 30, 2021 and entitled "Control method for sound-producing apparatuses, sound-producing system, and vehicle", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of intelligent vehicles, and more specifically, to a control method for sound-producing apparatuses, a sound-producing system, and a vehicle.
Background
With the improvement of living standards, vehicles have become an important means of transportation. While driving or waiting, people like to listen to music or the radio, and sometimes watch movies or short videos. Therefore, the sound field inside the vehicle has become an important factor of concern, and a good sound effect always brings a comfortable experience.
Currently, a user needs to manually adjust the playback intensity of each loudspeaker to optimize the sound field at a target position. If the driver has to make the adjustment manually, the driver's sight must shift to the screen, which poses a safety hazard while driving. Moreover, when occupants change seats or their number increases, the loudspeakers need to be adjusted manually again and again, resulting in a poor user experience.
Summary
Embodiments of this application provide a control method for sound-producing apparatuses, a sound-producing system, and a vehicle. By obtaining position information of the areas where users are located, the optimization center of the sound field is adjusted adaptively, which helps improve the users' listening experience.
According to a first aspect, a control method for sound-producing apparatuses is provided, the method including: a first device obtains position information of multiple areas where multiple users are located; and the first device controls multiple sound-producing apparatuses to operate according to the position information of the multiple areas and position information of the multiple sound-producing apparatuses.
In the embodiments of this application, the first device obtains the position information of the multiple areas where the multiple users are located, and controls the multiple sound-producing apparatuses according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses. The user does not need to adjust the sound-producing apparatuses manually, which helps reduce the user's learning cost and tedious operations; at the same time, it helps all users enjoy a good listening effect and improves the user experience.
In some possible implementations, the first device may be a vehicle, a sound system in a home theater, or a sound system in a karaoke room (KTV).
In some possible implementations, before the first device obtains the position information of the areas where the multiple users are located, the method further includes: the first device detects a first operation of a user.
In some possible implementations, the first operation is an operation of the user controlling the first device to play audio content; or the first operation is an operation of the user connecting a second device to the first device and playing audio content of the second device through the first device; or the first operation is an operation of the user turning on a sound-field adaptation switch.
With reference to the first aspect, in some implementations of the first aspect, the first device obtaining the position information of the multiple areas where the multiple users are located includes: the first device determines the position information of the areas where the multiple users are located according to collected sensing information. The sensing information may be one or more of image information, sound information, and pressure information. The image information may be collected by an image sensor, for example, a camera apparatus or a radar. The sound information may be collected by a sound sensor, for example, a microphone array. The pressure information may be collected by a pressure sensor, for example, a pressure sensor installed in a seat. In addition, the sensing information may be the data collected by the sensors, or information derived from the data collected by the sensors.
With reference to the first aspect, in some implementations of the first aspect, the first device obtaining the position information of the multiple areas where the multiple users are located includes: the first device determines the position information of the multiple areas according to data collected by an image sensor; or the first device determines the position information of the multiple areas according to data collected by a pressure sensor; or the first device determines the position information of the multiple areas according to data collected by a sound sensor.
In the embodiments of this application, the multiple sound-producing apparatuses are controlled by using the position information of the multiple areas where the multiple users are located and the position information of the multiple sound-producing apparatuses. This simplifies the computation performed by the first device when controlling the multiple sound-producing apparatuses, so that the first device can control them more conveniently.
In some possible implementations, the image sensor may include a camera, a lidar, and the like.
In some possible implementations, the image sensor may collect image information of an area and determine whether the image information contains face contour information, human-ear information, iris information, or the like, so as to determine whether a user is present in the area.
In some possible implementations, the sound sensor may include a microphone array.
It should be understood that the above sensor may be one sensor or multiple sensors, and the sensors may be of the same type (for example, all image sensors), or sensing information from multiple types of sensors may be used; for example, the image information collected by an image sensor and the sound information collected by a sound sensor may be used jointly to determine the position information of the users.
In some possible implementations, the position information of the multiple areas where the multiple users are located may include the center point of each of the multiple areas, or a preset point in each of the multiple areas, or a point obtained in each area according to a preset rule.
With reference to the first aspect, in some implementations of the first aspect, the first device controlling the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses includes: the first device determines a sound-field optimization center point, where the distances from the sound-field optimization center point to the center points of the multiple areas are equal; and the first device controls each of the multiple sound-producing apparatuses to operate according to the distance between the sound-field optimization center point and each sound-producing apparatus.
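The equidistant point described here can be sketched in a minimal two-dimensional form (the function name and coordinate representation below are illustrative assumptions, not from the patent): with two occupied regions the point is the midpoint of the segment between their centers, and with three it is the circumcenter of the triangle formed by the centers, as with point Q in FIG. 9; for four region centers forming a rectangle, the circumcenter of any three of them is the rectangle's center.

```python
def sound_field_center(region_centers):
    """Return a point equidistant from the given region center points.

    One center -> that point itself; two centers -> the midpoint of the
    segment; three centers -> the circumcenter of the triangle they form.
    """
    if len(region_centers) == 1:
        return region_centers[0]
    if len(region_centers) == 2:
        (x1, y1), (x2, y2) = region_centers
        return ((x1 + x2) / 2, (y1 + y2) / 2)
    # Circumcenter of the triangle through the first three centers,
    # solved from the perpendicular-bisector equations.
    (ax, ay), (bx, by), (cx, cy) = region_centers[:3]
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)
```

For example, with two centers at (0, 0) and (2, 2) the result is (1.0, 1.0), and the circumcenter of (0, 0), (2, 0), (0, 2) is also (1.0, 1.0), equidistant (by the square root of 2) from all three.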
In the embodiments of this application, before controlling the multiple sound-producing apparatuses, the first device may first determine the current sound-field optimization center point, and control the sound-producing apparatuses according to the distance information between the sound-field optimization center point and the multiple sound-producing apparatuses. This helps all users enjoy a good listening effect and improves the user experience.
In some possible implementations, the first device controlling the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses includes: controlling the multiple sound-producing apparatuses according to the position information of the multiple areas and a mapping relationship, where the mapping relationship is a mapping between the positions of the multiple areas and the playback intensities of the multiple sound-producing apparatuses.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: the first device prompts the position information of the sound-field optimization center point.
In the embodiments of this application, prompting the user with the position information of the sound-field optimization center point not only improves the listening effect for multiple users but also helps the user know the current sound-field optimization center.
In some possible implementations, the first device prompting the position information of the sound-field optimization center point includes: the first device prompts the position information of the sound-field optimization center point through a human-machine interface (HMI) or by sound.
In some possible implementations, the first device may be a vehicle, and the first device prompting the position information of the sound-field optimization center point includes: the vehicle prompts the position information of the sound-field optimization center point through ambient lights.
With reference to the first aspect, in some implementations of the first aspect, the multiple areas are areas inside a vehicle cabin.
With reference to the first aspect, in some implementations of the first aspect, the multiple areas include a front-row area and a rear-row area.
With reference to the first aspect, in some implementations of the first aspect, the multiple areas may include a driver area and a front-passenger area.
In some possible implementations, the multiple areas include a driver-seat area, a front-passenger-seat area, a second-row left area, and a second-row right area.
In some possible implementations, the first device may be a vehicle, and the first device obtaining the position information of the multiple areas where the multiple users are located includes: the vehicle obtains the position information of the multiple areas where the multiple users are located through a pressure sensor under the seat of each area.
In some possible implementations, the first device includes a microphone array, and the first device obtaining the position information of the multiple users includes: the first device obtains voice signals in the environment through the microphone array, and determines the position information of the multiple areas where the multiple users are located according to the voice signals.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: the first device prompts the position information of the multiple areas where the multiple users are located.
In some possible implementations, the first device prompting the position information of the multiple areas where the multiple users are located includes: the first device prompts the position information of the multiple areas where the multiple users are located through a human-machine interface (HMI) or by sound.
In some possible implementations, the first device may be a vehicle, and the first device prompting the position information of the multiple areas where the multiple users are located includes: the vehicle prompts the position information of the multiple areas where the multiple users are located through ambient lights.
With reference to the first aspect, in some implementations of the first aspect, controlling the multiple sound-producing apparatuses to operate includes: adjusting the playback intensity of each of the multiple sound-producing apparatuses.
In some possible implementations, the playback intensity of each of the multiple sound-producing apparatuses is proportional to the distance between that sound-producing apparatus and the user.
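As a minimal sketch of this proportional rule (the exact formulas in the description are given only as images, so the normalization below, keeping the average intensity at a base level p, is an assumption): each loudspeaker's intensity is scaled by its distance to the sound-field optimization center, so that farther loudspeakers play louder.

```python
import math

def playback_intensities(center, speaker_positions, base_intensity):
    """Scale each speaker's playback intensity in proportion to its
    distance from the sound-field optimization center.  Assumed
    normalization: the mean intensity over all speakers equals
    base_intensity, so equidistant speakers all play at that level."""
    dists = [math.dist(center, s) for s in speaker_positions]
    mean_d = sum(dists) / len(dists)
    return [base_intensity * d / mean_d for d in dists]
```

With the optimization center at the center of a rectangle of four speakers, all distances are equal, so every speaker plays at the base intensity p, matching the equidistant case described above.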
With reference to the first aspect, in some implementations of the first aspect, the multiple sound-producing apparatuses include a first sound-producing apparatus, and the first device adjusting the playback intensity of each of the multiple sound-producing apparatuses includes: the first device controls the playback intensity of the first sound-producing apparatus to be a first playback intensity. The method further includes: the first device obtains an instruction from the user to adjust the playback intensity of the first sound-producing apparatus from the first playback intensity to a second playback intensity; and in response to obtaining the instruction, the first device adjusts the playback intensity of the first sound-producing apparatus to the second playback intensity.
In the embodiments of this application, after adjusting the playback intensity of the first sound-producing apparatus to the first playback intensity, if the first device detects an operation of the user adjusting the playback intensity of the first sound-producing apparatus from the first playback intensity to a second playback intensity, it may adjust the playback intensity of the first sound-producing apparatus to the second playback intensity. This allows the user to quickly adjust the playback intensity of the first sound-producing apparatus so that it better meets the user's listening preference.
According to a second aspect, a sound-producing system is provided, including a sensor, a controller, and multiple sound-producing apparatuses, where the sensor is configured to collect data and send the data to the controller; and the controller is configured to obtain, according to the data, position information of multiple areas where multiple users are located, and to control the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and position information of the multiple sound-producing apparatuses.
With reference to the second aspect, in some implementations of the second aspect, the controller is specifically configured to: obtain the position information of the multiple areas according to data collected by an image sensor; or obtain the position information of the multiple areas according to data collected by a pressure sensor; or obtain the position information of the multiple areas according to data collected by a sound sensor.
With reference to the second aspect, in some implementations of the second aspect, the controller is specifically configured to: determine a sound-field optimization center point, where the distances from the sound-field optimization center point to the center points of the multiple areas are equal; and control each of the multiple sound-producing apparatuses to operate according to the distance between the sound-field optimization center point and each sound-producing apparatus.
With reference to the second aspect, in some implementations of the second aspect, the controller is further configured to send a first instruction to a first prompting apparatus, where the first instruction instructs the first prompting apparatus to prompt the position information of the sound-field optimization center point.
With reference to the second aspect, in some implementations of the second aspect, the multiple areas are areas inside a vehicle cabin.
With reference to the second aspect, in some implementations of the second aspect, the multiple areas include a front-row area and a rear-row area.
With reference to the second aspect, in some implementations of the second aspect, the front-row area includes a driver area and a front-passenger area.
With reference to the second aspect, in some implementations of the second aspect, the controller is further configured to send a second instruction to a second prompting apparatus, where the second instruction instructs the second prompting apparatus to prompt the position information of the multiple areas where the multiple users are located.
With reference to the second aspect, in some implementations of the second aspect, the controller is specifically configured to: adjust the playback intensity of each of the multiple sound-producing apparatuses.
With reference to the second aspect, in some implementations of the second aspect, the multiple sound-producing apparatuses include a first sound-producing apparatus, and the controller is specifically configured to control the playback intensity of the first sound-producing apparatus to be a first playback intensity; the controller is further configured to obtain a third instruction from the user to adjust the playback intensity of the first sound-producing apparatus from the first playback intensity to a second playback intensity, and, in response to obtaining the third instruction, adjust the playback intensity of the first sound-producing apparatus to the second playback intensity.
According to a third aspect, an electronic apparatus is provided, including: a transceiver unit configured to receive sensing information; and a processing unit configured to obtain, according to the sensing information, position information of multiple areas where multiple users are located, where the processing unit is further configured to control multiple sound-producing apparatuses to operate according to the position information of the multiple areas and position information of the multiple sound-producing apparatuses.
With reference to the third aspect, in some implementations of the third aspect, the processing unit being further configured to control the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses includes: the processing unit is configured to determine a sound-field optimization center point, where the distances from the sound-field optimization center point to the center points of the multiple areas are equal; and to control each of the multiple sound-producing apparatuses to operate according to the distance between the sound-field optimization center point and each sound-producing apparatus.
With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is further configured to send a first instruction to a first prompting unit, where the first instruction instructs the first prompting unit to prompt the position information of the sound-field optimization center point.
With reference to the third aspect, in some implementations of the third aspect, the multiple areas are areas inside a vehicle cabin.
With reference to the third aspect, in some implementations of the third aspect, the multiple areas include a front-row area and a rear-row area.
With reference to the third aspect, in some implementations of the third aspect, the multiple areas include a driver area and a front-passenger area.
With reference to the third aspect, in some implementations of the third aspect, the transceiver unit is further configured to send a second instruction to a second prompting unit, where the second instruction instructs the second prompting unit to prompt the position information of the multiple areas where the multiple users are located.
With reference to the third aspect, in some implementations of the third aspect, the processing unit is specifically configured to: adjust the playback intensity of each of the multiple sound-producing apparatuses.
With reference to the third aspect, in some implementations of the third aspect, the multiple sound-producing apparatuses include a first sound-producing apparatus, and the processing unit is specifically configured to control the playback intensity of the first sound-producing apparatus to be a first playback intensity; the transceiver unit is further configured to receive a third instruction, where the third instruction instructs that the playback intensity of the first sound-producing apparatus be adjusted from the first playback intensity to a second playback intensity; and the processing unit is further configured to adjust the playback intensity of the first sound-producing apparatus to the second playback intensity.
With reference to the third aspect, in some implementations of the third aspect, the sensing information includes one or more of image information, pressure information, and sound information.
In some possible implementations, the electronic apparatus may be a chip or an in-vehicle apparatus (for example, a controller).
In some possible implementations, the transceiver unit may be an interface circuit.
In some possible implementations, the processing unit may be a processor, a processing apparatus, or the like.
According to a fourth aspect, an apparatus is provided, including units for performing the method in any implementation of the first aspect.
According to a fifth aspect, an apparatus is provided, including a processing unit and a storage unit, where the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit so that the apparatus performs any possible method of the first aspect.
Optionally, the processing unit may be a processor, and the storage unit may be a memory, where the memory may be a storage unit within a chip (for example, a register or a cache), or a storage unit in the vehicle located outside the chip (for example, a read-only memory or a random access memory).
According to a sixth aspect, a system is provided, including a sensor and an electronic apparatus, where the electronic apparatus may be the electronic apparatus described in any possible implementation of the third aspect.
With reference to the sixth aspect, in some implementations of the sixth aspect, the system further includes multiple sound-producing apparatuses.
According to a seventh aspect, a system is provided, including multiple sound-producing apparatuses and an electronic apparatus, where the electronic apparatus may be the electronic apparatus described in any possible implementation of the third aspect.
With reference to the seventh aspect, in some implementations of the seventh aspect, the system further includes a sensor.
According to an eighth aspect, a vehicle is provided, including the sound-producing system described in any possible implementation of the second aspect; or the vehicle includes the electronic apparatus described in any possible implementation of the third aspect; or the vehicle includes the apparatus described in any possible implementation of the fourth aspect; or the vehicle includes the apparatus described in any possible implementation of the fifth aspect; or the vehicle includes the system described in any possible implementation of the sixth aspect; or the vehicle includes the system described in any possible implementation of the seventh aspect.
According to a ninth aspect, a computer program product is provided, the computer program product including computer program code that, when run on a computer, causes the computer to perform the method in the first aspect.
It should be noted that all or part of the computer program code above may be stored in a first storage medium, where the first storage medium may be packaged together with the processor or packaged separately from the processor; this is not specifically limited in the embodiments of this application.
According to a tenth aspect, a computer-readable medium is provided, the computer-readable medium storing program code that, when run on a computer, causes the computer to perform the method of the first aspect.
Brief Description of Drawings
FIG. 1 is a schematic functional block diagram of a vehicle according to an embodiment of this application.
FIG. 2 is a schematic structural diagram of a sound-producing system according to an embodiment of this application.
FIG. 3 is another schematic structural diagram of a sound-producing system according to an embodiment of this application.
FIG. 4 is a top view of a vehicle.
FIG. 5 is a schematic flowchart of a control method for sound-producing apparatuses according to an embodiment of this application.
FIG. 6 is a schematic diagram of the positions of the four loudspeakers in a vehicle cabin.
FIG. 7 is a schematic diagram of the sound-field optimization center of the in-vehicle loudspeakers when there are users in the driver seat, the front-passenger seat, the second-row left area, and the second-row right area, according to an embodiment of this application.
FIG. 8 is a schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 9 is a schematic diagram of the sound-field optimization center of the in-vehicle loudspeakers when there are users in the driver seat, the front-passenger seat, and the second-row left area, according to an embodiment of this application.
FIG. 10 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 11 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 12 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 13 is a schematic diagram of the sound-field optimization center of the in-vehicle loudspeakers when there are users in the driver seat and the second-row right area, according to an embodiment of this application.
FIG. 14 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 15 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 16 is a schematic diagram of the in-cabin sound-field optimization center when there are users in the driver seat and the front-passenger seat, according to an embodiment of this application.
FIG. 17 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 18 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 19 is a schematic diagram of the in-cabin sound-field optimization center when there are users in the driver seat and the second-row left, according to an embodiment of this application.
FIG. 20 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 21 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 22 is a schematic diagram of the in-cabin sound-field optimization center when there is a user in the driver seat, according to an embodiment of this application.
FIG. 23 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 24 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 25 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 26 is another schematic diagram of the central control screen of a vehicle displaying the sound-field optimization center according to an embodiment of this application.
FIG. 27 is a set of graphical user interfaces (GUIs) according to an embodiment of this application.
FIG. 28 is another set of GUIs according to an embodiment of this application.
FIG. 29 is a schematic diagram of the control method for sound-producing apparatuses according to an embodiment of this application applied to a home theater.
FIG. 30 is a schematic diagram of the sound-field optimization center in a home theater according to an embodiment of this application.
FIG. 31 is another schematic flowchart of a control method for sound-producing apparatuses according to an embodiment of this application.
FIG. 32 is a schematic structural diagram of a sound-producing system according to an embodiment of this application.
FIG. 33 is a schematic block diagram of an apparatus according to an embodiment of this application.
Detailed Description
The technical solutions in this application are described below with reference to the accompanying drawings.
FIG. 1 is a schematic functional block diagram of a vehicle 100 according to an embodiment of this application. The vehicle 100 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 100 may obtain information about its surroundings through a perception system 120 and derive an autonomous driving strategy based on an analysis of the surrounding environment information to achieve fully autonomous driving, or present the analysis result to the user to achieve partially autonomous driving.
The vehicle 100 may include various subsystems, such as an infotainment system 110, a perception system 120, a decision-making and control system 130, a drive system 140, and a computing platform 150. Optionally, the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, the subsystems and components of the vehicle 100 may be interconnected in a wired or wireless manner.
In some embodiments, the infotainment system 110 may include a communication system 111, an entertainment system 112, and a navigation system 113.
The communication system 111 may include a wireless communication system that communicates wirelessly with one or more devices, directly or via a communication network. For example, the wireless communication system 146 may use third-generation (3G) cellular communication, such as code division multiple access (CDMA), evolution data optimized (EVDO), the global system for mobile communication (GSM), or the general packet radio service (GPRS); fourth-generation (4G) cellular communication, such as long term evolution (LTE); or fifth-generation (5G) cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using Wi-Fi. In some embodiments, the wireless communication system 146 may communicate directly with devices using an infrared link, Bluetooth, or ZigBee. Other wireless protocols may also be used, such as various vehicle communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
The entertainment system 112 may include a central control screen, a microphone, and speakers. Based on the entertainment system, a user can listen to the radio or play music in the vehicle, or connect a mobile phone to the vehicle and mirror the phone's screen onto the central control screen. The central control screen may be a touchscreen that the user operates by touching it. In some cases, the user's voice signal can be picked up through the microphone, and certain controls of the vehicle 100, such as adjusting the cabin temperature, can be performed based on an analysis of the user's voice signal. In other cases, music can be played to the user through the speakers.
The navigation system 113 may include a map service provided by a map provider to offer route navigation for the vehicle 100, and may be used together with the vehicle's global positioning system 121 and inertial measurement unit 122. The map service provided by the map provider may be a two-dimensional map or a high-definition map.
The perception system 120 may include several sensors that sense information about the environment around the vehicle 100. For example, the perception system 120 may include a global positioning system 121 (which may be a GPS system, the BeiDou system, or another positioning system), an inertial measurement unit (IMU) 122, a lidar 123, a millimeter-wave radar 124, an ultrasonic radar 125, and a camera apparatus 126. The perception system 120 may also include sensors that monitor the internal systems of the vehicle 100 (for example, an in-cabin air quality monitor, a fuel gauge, or an oil temperature gauge). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, orientation, speed, and so on). Such detection and recognition are key functions for the safe operation of the vehicle 100.
The global positioning system 121 may be used to estimate the geographic position of the vehicle 100.
The inertial measurement unit 122 is configured to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration. In some embodiments, the inertial measurement unit 122 may be a combination of an accelerometer and a gyroscope.
The lidar 123 may use lasers to sense objects in the environment in which the vehicle 100 is located. In some embodiments, the lidar 123 may include one or more laser sources, a laser scanner, one or more detectors, and other system components.
The millimeter-wave radar 124 may use radio signals to sense objects in the surroundings of the vehicle 100. In some embodiments, in addition to sensing objects, the radar 126 may also be used to sense the speed and/or heading of the objects.
The ultrasonic radar 125 may use ultrasonic signals to sense objects around the vehicle 100.
The camera apparatus 126 may be used to capture image information of the surroundings of the vehicle 100. The camera apparatus 126 may include a monocular camera, a binocular camera, a structured-light camera, a panoramic camera, and the like, and the image information obtained by the camera apparatus 126 may include still images or video streams.
The decision-making and control system 130 includes a computing system 131 that analyzes and makes decisions based on the information obtained by the perception system 120. The decision-making and control system 130 also includes a vehicle controller 132 that controls the power system of the vehicle 100, as well as a steering system 133, an accelerator 134, and a braking system 135 for controlling the vehicle 100.
The computing system 131 may operate to process and analyze the various information obtained by the perception system 120 in order to identify targets, objects, and/or features in the surroundings of the vehicle 100. The targets may include pedestrians or animals, and the objects and/or features may include traffic signals, road boundaries, and obstacles. The computing system 131 may use techniques such as object recognition algorithms, structure from motion (SFM) algorithms, and video tracking. In some embodiments, the computing system 131 may be used to map the environment, track objects, estimate the speed of objects, and so on. The computing system 131 may analyze the obtained information and derive a control strategy for the vehicle.
The vehicle controller 132 may be used to coordinate control of the vehicle's power battery and the engine 141 to improve the power performance of the vehicle 100.
The steering system 133 is operable to adjust the heading of the vehicle 100. For example, in one embodiment, it may be a steering wheel system.
The accelerator 134 is used to control the operating speed of the engine 141 and thereby the speed of the vehicle 100.
The braking system 135 is used to decelerate the vehicle 100. The braking system 135 may use friction to slow the wheels 144. In some embodiments, the braking system 135 may convert the kinetic energy of the wheels 144 into electric current. The braking system 135 may also take other forms to slow the rotation of the wheels 144 and thereby control the speed of the vehicle 100.
The drive system 140 may include components that provide powered motion for the vehicle 100. In one embodiment, the drive system 140 may include an engine 141, an energy source 142, a transmission 143, and wheels 144. The engine 141 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of engine types, such as a hybrid engine consisting of a gasoline engine and an electric motor, or a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 141 converts the energy source 142 into mechanical energy.
Examples of the energy source 142 include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, solar panels, batteries, and other sources of electric power. The energy source 142 may also provide energy for other systems of the vehicle 100.
The transmission 143 may transmit mechanical power from the engine 141 to the wheels 144. The transmission 143 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 143 may also include other components, such as a clutch. The drive shaft may include one or more axles that can be coupled to one or more of the wheels 121.
Some or all of the functions of the vehicle 100 are controlled by the computing platform 150. The computing platform 150 may include at least one processor 151, and the processor 151 may execute instructions 153 stored in a non-transitory computer-readable medium such as a memory 152. In some embodiments, the computing platform 150 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
The processor 151 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor 151 may also include a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SOC), an application-specific integrated circuit (ASIC), or a combination thereof. Although FIG. 1 functionally illustrates the processor, the memory, and other elements of the computer 110 in the same block, a person of ordinary skill in the art should understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored in the same physical housing. For example, the memory may be a hard disk drive or another storage medium located in a housing different from that of the computer 110. Therefore, a reference to a processor or computer is to be understood as including a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as the steering component and the deceleration component, may each have their own processor that performs only the computation related to the function specific to that component.
In the various aspects described herein, the processor may be located remotely from the vehicle and communicate with the vehicle wirelessly. In other aspects, some of the processes described herein are executed on a processor arranged within the vehicle while others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the memory 152 may contain instructions 153 (for example, program logic) that can be executed by the processor 151 to perform various functions of the vehicle 100. The memory 152 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 110, the perception system 120, the decision-making and control system 130, and the drive system 140.
In addition to the instructions 153, the memory 152 may also store data, such as road maps, route information, the position, direction, and speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 100 and the computing platform 150 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
The computing platform 150 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the drive system 140, the perception system 120, and the decision-making and control system 130). For example, the computing platform 150 may use input from the decision-making and control system 130 in order to control the steering system 133 to avoid obstacles detected by the perception system 120. In some embodiments, the computing platform 150 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Optionally, one or more of the components above may be installed separately from or associated with the vehicle 100. For example, the memory 152 may exist partially or completely separate from the vehicle 100. The components above may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the components above are merely an example. In actual applications, components in the modules above may be added or removed according to actual needs, and FIG. 1 should not be construed as a limitation on the embodiments of this application.
An autonomous vehicle traveling on a road, such as the vehicle 100 above, can identify objects in its surroundings to determine an adjustment to its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some examples, each identified object may be considered independently, and the respective characteristics of each object, such as its current speed, acceleration, and distance from the vehicle, may be used to determine the speed to which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 100 or a perception and computing device associated with it (for example, the computing system 131 or the computing platform 150) may predict the behavior of the identified objects based on the characteristics of the identified objects and the state of the surrounding environment (for example, traffic, rain, ice on the road, and so on). Optionally, the identified objects all depend on each other's behavior, so all of the identified objects may also be considered together to predict the behavior of a single identified object. The vehicle 100 can adjust its speed based on the predicted behavior of the identified objects. In other words, the autonomous vehicle can determine, based on the predicted behavior of the objects, what stable state the vehicle will need to adjust to (for example, accelerate, decelerate, or stop). In this process, other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 on the road it is traveling, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 100 so that the autonomous vehicle follows a given trajectory and/or maintains safe lateral and longitudinal distances from objects near it (for example, cars in adjacent lanes on the road).
The vehicle 100 above may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, or the like, which is not specifically limited in the embodiments of this application.
With the improvement of living standards, vehicles have become an important means of transportation. While driving or waiting, people like to listen to music and the radio, and sometimes watch movies or short videos. Therefore, the sound field inside the vehicle has become an important factor of concern, and a good sound effect always brings a comfortable experience.
Currently, a user needs to manually adjust the playback intensity of each loudspeaker to optimize the sound field at a target position. If the driver has to make the adjustment manually, the driver's sight must shift to the screen, which poses a safety hazard while driving. Moreover, when occupants change seats or their number increases, the playback intensity of each loudspeaker needs to be adjusted manually again and again, resulting in a poor user experience.
Embodiments of this application provide a control method for sound-producing apparatuses, a sound-producing system, and a vehicle. By identifying the position information of the areas where users are located, the sound-field optimization center is adjusted automatically, so that every user can obtain a good listening effect.
The in-vehicle sound-producing system provided in the embodiments of this application is described below with reference to FIG. 2 and FIG. 3. FIG. 2 is a schematic structural diagram of a sound-producing system according to an embodiment of this application. The sound-producing system may be a controller area network (CAN) control system, which may include multiple sensors (for example, sensor 1, sensor 2, and so on), multiple electronic control units (ECUs), an in-vehicle entertainment head unit, a loudspeaker controller, and loudspeakers. The sensors include, but are not limited to, cameras, microphones, ultrasonic radars, millimeter-wave radars, lidars, vehicle speed sensors, motor power sensors, and engine speed sensors. The ECUs are configured to receive the data collected by the sensors and execute corresponding commands, obtaining periodic signals or event signals that the ECUs can then publish on the public CAN network; the ECUs include, but are not limited to, a vehicle controller, a hybrid power controller, an automatic transmission controller, and an autonomous driving controller. The in-vehicle entertainment head unit is configured to capture the periodic signals or event signals sent by the ECUs on the public CAN network and, upon recognizing a corresponding signal state, perform the corresponding operation or forward the signal to the loudspeaker controller. The loudspeaker controller is configured to receive command signals from the in-vehicle entertainment head unit on the private CAN network and adjust the loudspeakers. For example, in this embodiment of this application, the in-vehicle entertainment head unit may capture the image information collected by the cameras from the CAN bus, determine from the image information whether users are present in multiple areas of the cabin, and send the users' position information to the loudspeaker controller. The loudspeaker controller may control the playback intensity of each loudspeaker according to the users' position information.
FIG. 3 is another schematic structural diagram of an in-vehicle sound-producing system according to an embodiment of this application. The sound-producing system may use a ring-network communication architecture, in which all sensors and actuators (for example, components such as loudspeakers, ambient lights, air conditioners, and motors that receive and execute commands) can be connected to the nearest vehicle integration unit (VIU). As a communication interface unit, a VIU can be deployed where the vehicle's sensors and actuators are dense, so that the sensors and actuators can be connected nearby; at the same time, a VIU may have certain computing and driving capabilities (for example, a VIU may absorb the drive computing functions of some actuators). The sensors include, but are not limited to, cameras, microphones, ultrasonic radars, millimeter-wave radars, lidars, vehicle speed sensors, motor power sensors, and engine speed sensors.
The VIUs communicate with each other in a network; the intelligent driving computing platform/mobile data center (MDC), the vehicle domain controller (VDC), and the cockpit domain controller (CDC) are each redundantly connected to the ring communication network formed by the VIUs. After a sensor (for example, a camera) collects data, it may send the data to a VIU. The VIU may publish the data on the ring network, where the MDC, VDC, and CDC collect the relevant data, compute on it, convert it into signals including the users' position information, and publish them back onto the ring network. The playback intensity of the loudspeakers is controlled through the corresponding computing and driving capabilities in the VIUs.
It should be understood that, as shown in FIG. 3, different VIUs may correspond to loudspeakers at different positions. For example, VIU1 is used to drive loudspeaker 1, VIU2 to drive loudspeaker 2, VIU3 to drive loudspeaker 3, and VIU4 to drive loudspeaker 4. The arrangement of the VIUs may be independent of the loudspeakers; for example, VIU1 may be arranged at the left rear of the vehicle while loudspeaker 1 is arranged near the driver-side door. A sensor or actuator can connect to the nearest VIU, thereby saving wiring harness. Because the number of interfaces on the MDC, VDC, and CDC is limited, the VIUs can take on the connection of multiple sensors and multiple actuators, thereby serving the interface and communication functions.
It should also be understood that, in the embodiments of this application, which VIU a sensor or controller connects to and which controller controls it may be set at the factory when the sound-producing system is shipped, or may be user-defined, and the hardware is replaceable and upgradeable.
It should also be understood that a VIU may absorb the drive computing functions of some sensors and actuators, so that when certain controllers (for example, the CDC or VDC) fail, the VIU can directly process the data collected by the sensors and then control the actuators.
In one embodiment, the communication architecture shown in FIG. 3 may be an intelligent digital vehicle platform (IDVP) ring-network communication architecture.
FIG. 4 is a top view of a vehicle. As shown in FIG. 4, position 1 is the driver seat, position 2 is the front-passenger seat, positions 3-5 are the rear-row area, positions 6a-6d are the positions of the four loudspeakers in the vehicle, position 7 is the position of the in-cabin camera, and position 8 is the position of the CDC and the central control screen. The loudspeakers can be used to play the media sound in the vehicle; the in-cabin camera can be used to detect the positions of the occupants; the central control screen can be used to display image information and application interfaces; and the CDC connects the peripherals and provides data analysis and processing capabilities.
It should be understood that FIG. 4 is described merely with the example in which the loudspeakers at positions 6a-6d are located near the driver-side door, the front-passenger door, the second-row left door, and the second-row right door, respectively. The positions of the loudspeakers are not specifically limited in the embodiments of this application. The loudspeakers may also be located near the vehicle's doors, near the central control screen, or in the roof, the floor, or the seats (for example, in the seat headrests).
FIG. 5 is a schematic flowchart of a control method 500 for sound-producing apparatuses according to an embodiment of this application. The method 500 may be applied to a vehicle that includes multiple sound-producing apparatuses (for example, loudspeakers). The method 500 includes:
S501: The vehicle obtains the position information of the users.
In one embodiment, the vehicle may start the in-cabin camera to obtain image information of the areas in the cabin (for example, the driver seat, the front-passenger seat, and the rear-row area), and determine from the image information of each area whether there is a user in that area. For example, the vehicle may analyze the image information collected by the camera to find the contour of a human face, and thereby determine whether a user is present in the area. As another example, the vehicle may analyze the image information collected by the camera to find iris information of human eyes, and thereby determine that a user is present in the area.
In one embodiment, when the vehicle detects an operation of the user turning on the sound-field adaptation switch, the vehicle may start the camera to obtain the image information of the areas in the cabin.
For example, the user may select the settings option on the central control screen and enter the sound-effect function interface, where the sound-field adaptation switch can be turned on.
In one embodiment, the vehicle may also detect whether there is a user in a given area through a pressure sensor under the seat. For example, when the pressure value detected by the pressure sensor in the seat of an area is greater than or equal to a preset value, it may be determined that a user is present in that area.
In one embodiment, the vehicle may also determine sound-source position information from the audio information obtained by a microphone array, and thereby determine which areas contain users.
In one embodiment, the vehicle may also obtain the position information of the users in the cabin through one of, or a combination of, the in-cabin camera, the pressure sensors, and the microphone array described above.
It should be understood that, in this embodiment of this application, the data collected by the sensors (for example, the in-cabin camera, the pressure sensors, or the microphone array) may be transmitted to the CDC, and the CDC may process the data to determine which areas contain users.
For example, after processing the data, the CDC may convert it into flag bits: when only the driver seat is occupied, the CDC may output 1000; when only the front-passenger seat is occupied, the CDC may output 0100; when only the second-row left area is occupied, the CDC may output 0010; when both the driver seat and the front-passenger seat are occupied, the CDC may output 1100; and when the driver seat, the front-passenger seat, and the second-row left are occupied, the CDC may output 1110.
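The flag-bit output just described can be sketched as follows (the seat order and encoding are taken from the examples in this paragraph; the function and seat names are illustrative assumptions):

```python
# Seat order matching the examples: driver, front passenger,
# second-row left, second-row right.
SEATS = ("driver", "front_passenger", "second_row_left", "second_row_right")

def occupancy_flags(occupied_seats):
    """Encode the set of occupied seats as a 4-character flag string,
    e.g. only the driver seat occupied -> "1000"."""
    return "".join("1" if seat in occupied_seats else "0" for seat in SEATS)
```

For example, `occupancy_flags({"driver", "front_passenger"})` yields `"1100"`, matching the case above where both front seats are occupied.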
It should be understood that the manner in which the CDC outputs the position information above is described merely with flag bits as an example, and the embodiments of this application are not limited thereto.
It should also be understood that the division of the occupant areas into the driver seat, front-passenger seat, second-row left, and second-row right above is merely an example, and the embodiments of this application are not limited thereto. For example, the cabin may also be divided into the driver seat, front-passenger seat, second-row left, second-row middle, and second-row right. As another example, for a seven-seat SUV, the cabin may be divided into the driver seat, front-passenger seat, second-row left, second-row right, third-row left, and third-row right. As another example, for a bus, the cabin may be divided into a front-row area and a rear-row area; or, for a multi-passenger bus, the cabin may be divided into a driver area, a passenger area, and so on.
S502: The vehicle adjusts the sound-producing apparatuses according to the position information of the users.
For example, taking loudspeakers as the sound-producing apparatuses, the description continues with the loudspeakers at positions 6a-6d in FIG. 4. FIG. 6 shows the positions of the four loudspeakers. Taking as an example that the lines connecting the points where the four loudspeakers are located form a rectangle ABCD, loudspeaker 1 is arranged at point A, loudspeaker 2 at point B, loudspeaker 3 at point C, and loudspeaker 4 at point D. Point O is the center of the rectangle ABCD (the distances from point O to the four points A, B, C, and D are equal). It should be understood that different cars have different loudspeaker positions and numbers; in a specific implementation, the adjustment manner can be designed according to the car model or the arrangement of the loudspeakers in the car, which is not limited in this application.
FIG. 7 is a schematic diagram of the sound-field optimization center of the in-vehicle loudspeakers when there are users in the driver seat, the front-passenger seat, the second-row left area, and the second-row right area, according to an embodiment of this application. The center points of the areas can form a rectangle EFGH, and the center point of the rectangle EFGH may be point Q. Point Q may be the current sound-field optimization center point in the cabin.
For example, point Q may coincide with point O. Since the center point Q of the rectangle EFGH is then equidistant from the four loudspeakers, the vehicle may control the four loudspeakers to have the same playback intensity (for example, all four loudspeakers at intensity p).
For example, if point Q does not coincide with point O, the vehicle may control the playback intensities of the four loudspeakers according to the distances from point Q to the four loudspeakers.
For example, for loudspeaker 1 (at point A), the vehicle may control the playback intensity of loudspeaker 1 according to a formula that is given in the original publication as an image and is not reproduced here; likewise, the vehicle may control the playback intensities of loudspeaker 2 (at point B), loudspeaker 3 (at point C), and loudspeaker 4 (at point D) according to corresponding formulas, each also given as an image.
FIG. 8 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the driver seat, the front-passenger seat, the second-row left area, and the second-row right area, it may prompt the user on the central control screen that "occupants detected in the driver seat, front-passenger seat, second-row left, and second-row right", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the driver seat, the front-passenger seat, the second-row left, and the second-row right are located.
FIG. 9 is a schematic diagram of the sound-field optimization center of the in-vehicle loudspeakers when there are users in the driver seat, the front-passenger seat, and the second-row left area, according to an embodiment of this application. The center points of the areas where the driver seat, the front-passenger seat, and the second-row left are located can form a triangle EFG, and the circumcenter of the triangle EFG may be point Q. Point Q may be the current sound-field optimization center point in the cabin.
For example, point Q may coincide with point O. Since point Q is then equidistant from the four loudspeakers, the vehicle may control the four loudspeakers to have the same playback intensity (for example, all four loudspeakers at intensity p).
For example, point Q may also not coincide with point O. In that case, for the manner in which the vehicle controls the playback intensities of the four loudspeakers, refer to the description in the foregoing embodiments; details are not repeated here.
FIG. 10 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the driver seat, the front-passenger seat, and the second-row left area, it may prompt the user on the central control screen that "occupants detected in the driver seat, front-passenger seat, and second-row left", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the driver seat, the front-passenger seat, and the second-row left are located.
FIG. 11 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the driver seat, the front-passenger seat, and the second-row right area, it may prompt the user on the central control screen that "occupants detected in the driver seat, front-passenger seat, and second-row right", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the driver seat, the front-passenger seat, and the second-row right are located.
FIG. 12 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the driver seat, the second-row left, and the second-row right area, it may prompt the user on the central control screen that "occupants detected in the driver seat, second-row left, and second-row right", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the driver seat, the second-row left, and the second-row right are located.
It should be understood that when there are users in the driver seat, the front-passenger seat, and the second-row right area, or in the driver seat, the second-row left, and the second-row right areas, for the manner in which the vehicle controls the playback intensities of the four loudspeakers, refer to the manner described above for the case in which there are users in the driver seat, the front-passenger seat, and the second-row left area; details are not repeated here.
FIG. 13 is a schematic diagram of the sound-field optimization center of the in-vehicle loudspeakers when there are users in the driver seat and the second-row right area, according to an embodiment of this application. The line connecting the center points of the areas where the driver seat and the second-row right are located is EG, and the midpoint of EG may be point Q. Point Q may be the current sound-field optimization center point in the cabin.
For example, point Q may coincide with point O. Since the midpoint of the segment EG is then equidistant from the four loudspeakers, the vehicle may control the four loudspeakers to have the same playback intensity (for example, all four loudspeakers at intensity p).
For example, if point Q does not coincide with point O, the vehicle may control the playback intensities of the four loudspeakers according to the distances from point Q to the four loudspeakers. For the specific control process, refer to the description in the foregoing embodiments; details are not repeated here.
FIG. 14 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in both the driver seat and the second-row right area, it may prompt the user on the central control screen that "occupants detected in the driver seat and second-row right", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the driver seat and the second-row right are located.
FIG. 15 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the front-passenger seat and the second-row left area, it may prompt the user on the central control screen that "occupants detected in the front-passenger seat and second-row left", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the front-passenger seat and the second-row left are located.
FIG. 16 is a schematic diagram of the in-cabin sound-field optimization center when there are users in the driver seat and the front-passenger seat, according to an embodiment of this application. The line connecting the center points of the areas where the driver seat and the front-passenger seat are located is EF, and the midpoint of EF may be point P. Point P may be the current in-cabin sound-field optimization center point.
FIG. 17 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the driver seat and the front-passenger seat, it may prompt the user on the central control screen that "occupants detected in the driver seat and front-passenger seat", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the driver seat and the front-passenger seat are located.
For example, the vehicle may control the playback intensities of the four loudspeakers according to the distances from point P to the four loudspeakers.
For example, for loudspeaker 1 (at point A), the vehicle may control the playback intensity of loudspeaker 1 according to a formula that is given in the original publication as an image and is not reproduced here; likewise, the vehicle may control the playback intensities of loudspeaker 2 (at point B), loudspeaker 3 (at point C), and loudspeaker 4 (at point D) according to corresponding formulas, each also given as an image.
FIG. 18 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the second-row left area and the second-row right area, it may prompt the user on the central control screen that "occupants detected in the second-row left and second-row right", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the second-row left and the second-row right are located.
FIG. 19 is a schematic diagram of the in-cabin sound-field optimization center when there are users in the driver seat and the second-row left, according to an embodiment of this application. The line connecting the center points of the areas where the driver seat and the second-row left are located is EH, and the midpoint of EH may be point R. Point R may be the current in-cabin sound-field optimization center point.
FIG. 20 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the driver seat and the second-row left area, it may prompt the user on the central control screen that "occupants detected in the driver seat and second-row left", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the driver seat and the second-row left are located.
For example, the vehicle may control the playback intensities of the four loudspeakers according to the distances from point R to the four loudspeakers.
For example, for loudspeaker 1 (at point A), the vehicle may control the playback intensity of loudspeaker 1 according to a formula that is given in the original publication as an image and is not reproduced here; likewise, the vehicle may control the playback intensities of loudspeaker 2 (at point B), loudspeaker 3 (at point C), and loudspeaker 4 (at point D) according to corresponding formulas, each also given as an image.
FIG. 21 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects users in the front-passenger seat and the second-row right area, it may prompt the user on the central control screen that "occupants detected in the front-passenger seat and second-row right", and at the same time indicate that the current sound-field optimization center point may be a point equidistant from the center points of the areas where the front-passenger seat and the second-row right are located.
FIG. 22 is a schematic diagram of the in-cabin sound-field optimization center when there is a user in the driver seat, according to an embodiment of this application. The center point of the area where the driver seat is located is point E, and point E may be the current in-cabin sound-field optimization center point.
FIG. 23 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects a user in the driver seat, it may prompt the user on the central control screen that "occupant detected in the driver seat", and at the same time indicate that the current sound-field optimization center point may be the center point of the area where the driver seat is located.
For example, the vehicle may control the playback intensities of the four loudspeakers according to the distances from point E to the four loudspeakers.
For example, for loudspeaker 1 (at point A), the vehicle may control the playback intensity of loudspeaker 1 according to a formula that is given in the original publication as an image and is not reproduced here; likewise, the vehicle may control the playback intensities of loudspeaker 2 (at point B), loudspeaker 3 (at point C), and loudspeaker 4 (at point D) according to corresponding formulas, each also given as an image.
FIG. 24 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects a user in the front-passenger seat, it may prompt the user on the central control screen that "occupant detected in the front-passenger seat", and at the same time indicate that the current sound-field optimization center point may be the center point of the area where the front-passenger seat is located.
FIG. 25 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects a user in the second-row left area, it may prompt the user on the central control screen that "occupant detected in the second-row left", and at the same time indicate that the current sound-field optimization center point may be the center point of the area where the second-row left is located.
FIG. 26 is a schematic diagram of the central control screen of the vehicle displaying the sound-field optimization center according to an embodiment of this application. When the vehicle detects a user in the second-row right area, it may prompt the user on the central control screen that "occupant detected in the second-row right", and at the same time indicate that the current sound-field optimization center point may be the center point of the area where the second-row right is located.
It should be understood that the above is described with the example in which the user's position information in S501 is the center point of the area where the user is located; the embodiments of this application are not limited thereto. For example, the user's position information may also be another preset point of the area where the user is located, or a point in the area computed according to a preset rule (for example, a preset algorithm).
In one embodiment, the user's position information may also be determined from the position information of the user's ears. The position information of the user's ears can be determined from the image information collected by the camera apparatus. For example, the user's ear position information may be the midpoint of the line connecting a first point and a second point, where the first point is a point on the user's left ear and the second point is a point on the user's right ear. As another example, the position information of the auricles of the user's ears may be used. The position information of the area can be determined according to the position information of the user's ears or of the auricles of the user's ears.
The following describes, with reference to FIG. 27 and FIG. 28, the process in which the user manually adjusts the playback intensity of a certain loudspeaker after the vehicle has adjusted the playback intensities of the multiple sound-producing apparatuses based on the users' position information.
FIG. 27 shows a set of graphical user interfaces (GUIs) according to an embodiment of this application.
As shown in (a) of FIG. 27, when there are users in the driver seat, the front-passenger seat, the second-row left area, and the second-row right area, the vehicle may prompt the user through the HMI that "occupants detected in the driver seat, front-passenger seat, second-row left, and second-row right", and indicate the current sound-field optimization center. As shown in (a) of FIG. 27, the smiley faces on the driver seat, front-passenger seat, second-row left, and second-row right indicate that there are users in those areas. When the vehicle detects an operation of the user pressing and holding the smiley face in the second-row left area, it may display an icon 2701 (for example, a trash-can icon) on the HMI. When the vehicle detects the user dragging the smiley face in the second-row left area onto the icon 2701, the vehicle may display the GUI shown in (b) of FIG. 27 on the HMI.
As shown in (b) of FIG. 27, in response to detecting the operation of the user dragging the smiley face in the second-row left area onto the icon 2701, the vehicle may prompt the user through the HMI that "the loudspeaker volume of the second-row left area has been reduced to 0 for you".
In one embodiment, if there are users in the driver seat, the front-passenger seat, the second-row left, and the second-row right, the current playback intensity of the four loudspeakers may be p. When the vehicle detects the operation of the user dragging the smiley face in the second-row left area onto the icon 2701, the vehicle may reduce the playback intensity of the loudspeaker in the second-row left area to 0, or from p to 0.1p, which is not limited in this embodiment of this application.
FIG. 28 shows a set of GUIs according to an embodiment of this application.
As shown in (a) of FIG. 28, when there are users in the driver seat, the front-passenger seat, the second-row left area, and the second-row right area, the vehicle may prompt the user through the HMI that "occupants detected in the driver seat, front-passenger seat, second-row left, and second-row right" and indicate the current sound-field optimization center. When the vehicle detects, on the HMI, an operation of the user's finger sliding upward in the second-row left area, it may display a playback-intensity scroll bar 2801, which may include a scroll block 2802.
As shown in (b) of FIG. 28, in response to detecting the operation of the user's finger sliding upward in the second-row left area, the vehicle may increase the playback intensity of the loudspeaker near the second-row left area and display the scroll block 2802 moving upward on the HMI. For example, the playback intensity of the loudspeaker near the second-row left area may be increased from p to 1.5p. At the same time, the vehicle may prompt the user through the HMI that "the loudspeaker volume of the second-row left area has been increased for you".
In this embodiment of this application, after the vehicle adjusts the playback intensity of a loudspeaker to a first playback intensity, if it detects an operation of the user adjusting that loudspeaker's playback intensity from the first playback intensity to a second playback intensity, it may adjust that loudspeaker's playback intensity to the second playback intensity. This allows the user to quickly adjust the playback intensity of the loudspeaker of that area, so that the loudspeaker of that area better meets the user's listening preference.
In one embodiment, the vehicle may also determine the state of a user in an area from the image information collected by the camera, and adjust the playback intensity of the loudspeaker near the area based on both the position information of the area and the user's state. For example, when the vehicle detects that there is a user in the second-row left area and that the user is resting, the vehicle may control the playback intensity of the loudspeaker near the second-row left area to be 0 or another value.
In one embodiment, the second playback intensity may also be a default playback intensity (for example, a second playback intensity of 0). When the vehicle detects a preset operation by the user on a certain area (for example, the second-row left area) on the central control screen, the vehicle may adjust the playback intensity of the loudspeaker of that area from the first playback intensity to the default playback intensity.
In one embodiment, the preset operation includes, but is not limited to, a press-and-hold operation by the user detected in the area (for example, pressing and holding the seat in the second-row left area), or a slide or tap operation in the area.
It should be understood that the application of the control method for sound-producing apparatuses provided in the embodiments of this application to the in-vehicle scenario has been described above with reference to FIG. 6 to FIG. 28; the control method can also be used in other scenarios, for example, home theaters and karaoke rooms. FIG. 29 is a schematic diagram of the control method for sound-producing apparatuses according to an embodiment of this application applied to a home theater. As shown in FIG. 29, the home theater may include speaker 1, speaker 2, and speaker 3. The sound system in the home theater can adjust the three speakers by detecting the positional relationship between the user and the three speakers.
FIG. 30 is a schematic diagram of the sound-field optimization center in the home theater according to an embodiment of this application. For example, taking as an example that the lines connecting the points where the three speakers are located form a triangle ABC, speaker 1 is arranged at point A, speaker 2 at point B, and speaker 3 at point C, where point O is the circumcenter of the triangle ABC.
When the center point of the area where the user is located coincides with point O, or point O is located in the area where the user is located, the sound system may control speaker 1, speaker 2, and speaker 3 to have the same playback intensity (for example, all three speakers at intensity p).
When the center point of the area where the user is located does not coincide with point O, or point O is not located in the area where the user is located, the sound system may adjust the playback intensities of the three speakers according to the positional relationship between the user's area and the three speakers.
For example, taking the center point of the area where the user is located as point Q: for speaker 1 (at point A), speaker 2 (at point B), and speaker 3 (at point C), the sound system may control each speaker's playback intensity according to a corresponding formula that is given in the original publication as an image and is not reproduced here.
FIG. 31 is a schematic flowchart of a control method 3100 for sound-producing apparatuses according to an embodiment of this application. The method 3100 may be applied to a first device. As shown in FIG. 31, the method 3100 includes:
S3101: The first device obtains position information of multiple areas where multiple users are located.
Optionally, the first device obtaining the position information of the multiple areas where the multiple users are located includes: obtaining sensing information; and determining the position information of the multiple areas according to the sensing information, where the sensing information includes one or more of image information, pressure information, and sound information.
For example, the sensing information may include image information, which the first device may obtain through an image sensor.
For example, taking the first device being a vehicle as an example, the vehicle may use the image information collected by a camera apparatus to determine whether the image information includes face contour information, human-ear information, iris information, or the like. When the vehicle needs to determine whether there is a user in the driver area, it may obtain the image information of the driver area collected by the driver camera. If the vehicle determines that the image information includes one or more of face contour information, human-ear information, or iris information, the first device may determine that a user is present in the driver area.
It should be understood that the embodiments of this application do not limit the implementation of determining whether the image information includes one or more of face contour information, human-ear information, or iris information. For example, the vehicle may input the image information into a neural network to obtain a classification result indicating that the area contains a user's face.
As another example, the vehicle may also establish a coordinate system for the driver area. When the vehicle needs to determine whether the driver area is occupied, it may use the driver camera to collect image information at multiple coordinate points in the coordinate system, and then analyze whether human feature information exists at those coordinate points. If human feature information exists, the vehicle may determine that a user is present in the driver area.
Optionally, the first device is a vehicle, and the sensing information may be pressure information. For example, a pressure sensor is included under each seat in the vehicle, and the first device obtaining the position information of the multiple areas where the multiple users are located includes: the first device obtains the position information of the multiple areas where the multiple users are located through the pressure information (for example, pressure values) collected by the pressure sensors.
Optionally, when the pressure value collected by a pressure sensor is greater than or equal to a first threshold, it is determined that there is a user in the area corresponding to that pressure sensor. For example, when the pressure value detected by the pressure sensor under the driver seat is greater than or equal to a preset pressure value, the vehicle may determine that there is a user in the driver area.
Optionally, the sensing information may be sound information. The first device obtaining the position information of the multiple areas where the multiple users are located includes: the first device obtains the position information of the multiple areas where the multiple users are located through the sound signals collected by a microphone array. For example, the first device may localize the users based on the sound signals collected by the microphone array. If the first device localizes a user to a certain area based on the sound signals, it may determine that there is a user in that area.
Optionally, the first device may also combine at least two of the image information, the pressure information, and the sound information to determine whether a user is present in an area.
For example, taking the first device being a vehicle as an example: when the vehicle needs to determine whether there is a user in the driver area, it may obtain the image information collected by the driver camera and the pressure information collected by the pressure sensor in the driver seat. If the image information collected by the driver camera is determined to include face information and the pressure value collected by the pressure sensor in the driver seat is greater than or equal to the first threshold, the vehicle may determine that a user is present in the driver area.
As another example, when the first device needs to determine whether a certain area is occupied, it may obtain the image information of the area collected by a camera and pick up the sound information in the environment through the microphone array. If the image information of the area collected by the camera is determined to include face information and the sound information collected by the microphone array is determined to come from that area, the vehicle may determine that a user is present in that area.
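The multi-sensor fusion described in these examples can be sketched as a simple conjunction of the two pieces of evidence (the function name, argument names, and the specific AND rule are illustrative assumptions; the patent only requires that both sources agree):

```python
def user_present(face_detected, pressure_value, pressure_threshold):
    """Fuse camera and seat-pressure evidence: report a user in the
    region only when the camera found a face AND the seat pressure
    reaches the preset threshold, as in the driver-seat example."""
    return bool(face_detected) and pressure_value >= pressure_threshold
```

With a threshold of, say, 20 N, a detected face plus a 30 N reading reports an occupant, while either signal alone does not.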
S3102: The first device controls the multiple sound-producing apparatuses to operate according to the position information of the multiple areas where the multiple users are located and the position information of the multiple sound-producing apparatuses.
Optionally, the position information of the multiple areas where the multiple users are located may include the center point of each of the multiple areas, or a preset point in each of the multiple areas, or a point obtained in each area according to a preset rule (for example, a preset algorithm).
Optionally, the first device controlling the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses includes: controlling the multiple sound-producing apparatuses according to the position information of the multiple areas and a mapping relationship, where the mapping relationship is a mapping between the positions of the multiple areas and the playback intensities of the multiple sound-producing apparatuses.
For example, taking the vehicle shown in FIG. 4 as an example, Table 1 shows a mapping relationship between the positions of the multiple areas and the playback intensities of the multiple sound-producing apparatuses.
Table 1
(The contents of Table 1 are given in the original publication as an image and are not reproduced here.)
It should be understood that the mapping relationship between positions and playback intensities shown in Table 1 above is merely illustrative; the embodiments of this application do not limit the manner of dividing the areas or the playback intensities of the loudspeakers.
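A mapping relationship of the kind Table 1 describes could be sketched as a lookup table keyed on the occupancy flag bits (since the actual table is not reproduced, every numeric value below is purely illustrative, as are the names; the speaker order follows the loudspeakers near the driver door, front-passenger door, second-row left door, and second-row right door):

```python
# Hypothetical mapping from occupancy flag bits to per-speaker scale
# factors (illustrative values only, not those of the original Table 1).
INTENSITY_MAP = {
    "1111": (1.0, 1.0, 1.0, 1.0),  # all four areas occupied: equal intensity
    "1000": (0.6, 1.0, 1.0, 1.2),  # driver only: nearer speaker quieter
    "1100": (0.8, 0.8, 1.2, 1.2),  # front row occupied: rear speakers louder
}

def intensities_for(flags, base=1.0):
    """Look up the per-speaker playback intensities for an occupancy
    pattern, falling back to equal intensities for unlisted patterns."""
    scale = INTENSITY_MAP.get(flags, (1.0, 1.0, 1.0, 1.0))
    return [base * s for s in scale]
```

Such a precomputed table avoids any geometric computation at run time, which matches the remark above that a mapping relationship simplifies the first device's control of the sound-producing apparatuses.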
Optionally, the first device controlling the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses includes: the first device determines a sound-field optimization center point, where the distances from the sound-field optimization center point to the center points of the multiple areas are equal; and the first device controls each of the multiple sound-producing apparatuses to operate according to the distance between the sound-field optimization center point and each sound-producing apparatus.
Optionally, the method further includes: the first device prompts the position information of the sound-field optimization center point.
Optionally, the first device prompts the position information of the current sound-field optimization center through the HMI, by sound, or through ambient lights.
Optionally, the multiple areas are areas inside a vehicle cabin.
Optionally, the multiple areas include a front-row area and a rear-row area. Optionally, the multiple areas may include a driver area and a front-passenger area.
Optionally, the method further includes: the first device prompts the position information of the multiple areas where the multiple users are located.
Optionally, the first device controlling the multiple sound-producing apparatuses to operate includes:
the first device adjusts the playback intensity of each of the multiple sound-producing apparatuses.
Optionally, the multiple sound-producing apparatuses include a first sound-producing apparatus, and the first device adjusting the playback intensity of each of the multiple sound-producing apparatuses includes: the first device controls the playback intensity of the first sound-producing apparatus to be a first playback intensity. The method further includes: the first device obtains an instruction from the user to adjust the playback intensity of the first sound-producing apparatus from the first playback intensity to a second playback intensity; and in response to obtaining the instruction, the first device adjusts the playback intensity of the first sound-producing apparatus to the second playback intensity.
For example, as shown in (b) of FIG. 27, there are users in the driver seat, the front-passenger seat, the second-row left, and the second-row right areas of the vehicle, and the vehicle may control the playback intensity of the four loudspeakers to be p. When the vehicle detects the user dragging the smiley face of the second-row left area onto the icon 2701 on the HMI, the vehicle may adjust the playback intensity of the loudspeaker near the second-row left area from p to 0.
For example, as shown in (b) of FIG. 28, there are users in the driver seat, the front-passenger seat, the second-row left, and the second-row right areas of the vehicle, and the vehicle may control the playback intensity of the four loudspeakers to be p. When the vehicle detects the operation of the user sliding upward in the second-row left area on the HMI, the vehicle may adjust the playback intensity of the loudspeaker near the second-row left area from p to 1.5p.
In the embodiments of this application, the first device obtains the position information of the multiple areas where the multiple users are located, and controls the multiple sound-producing apparatuses according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses. The user does not need to adjust the sound-producing apparatuses manually, which helps reduce the user's learning cost and tedious operations; at the same time, it helps all users enjoy a good listening effect and improves the user experience.
FIG. 32 is a schematic structural diagram of a sound-producing system 3200 according to an embodiment of this application. The sound-producing system may include a sensor 3201, a controller 3202, and multiple sound-producing apparatuses 3203, where:
the sensor 3201 is configured to collect data and send the data to the controller; and
the controller 3202 is configured to obtain, according to the data, position information of multiple areas where multiple users are located, and to control the multiple sound-producing apparatuses 3203 to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses. Optionally, the data includes at least one of image information, pressure information, and sound information.
Optionally, the controller 3202 is specifically configured to: determine a sound-field optimization center point, where the distances from the sound-field optimization center point to the center points of the multiple areas are equal; and control each of the multiple sound-producing apparatuses to operate according to the distance between the sound-field optimization center point and each sound-producing apparatus.
Optionally, the controller 3202 is further configured to send a first instruction to a first prompting apparatus, where the first instruction instructs the first prompting apparatus to prompt the position information of the sound-field optimization center point.
Optionally, the multiple areas are areas inside a vehicle cabin.
Optionally, the multiple areas include a front-row area and a rear-row area.
Optionally, the multiple areas may include a driver area and a front-passenger area.
Optionally, the controller 3202 is further configured to send a second instruction to a second prompting apparatus, where the second instruction instructs the second prompting apparatus to prompt the position information of the multiple areas where the multiple users are located.
Optionally, the controller 3202 is specifically configured to: adjust the playback intensity of each of the multiple sound-producing apparatuses 3203.
Optionally, the multiple sound-producing apparatuses 3203 include a first sound-producing apparatus, and the controller 3202 is specifically configured to control the playback intensity of the first sound-producing apparatus to be a first playback intensity; the controller 3202 is further configured to obtain a third instruction from the user to adjust the playback intensity of the first sound-producing apparatus from the first playback intensity to a second playback intensity, and, in response to obtaining the third instruction, adjust the playback intensity of the first sound-producing apparatus to the second playback intensity.
FIG. 33 is a schematic block diagram of an apparatus 3300 according to an embodiment of this application. The apparatus 3300 includes a transceiver unit 3301 and a processing unit 3302, where the transceiver unit 3301 is configured to receive sensing information; the processing unit 3302 is configured to obtain, according to the sensing information, position information of multiple areas where multiple users are located; and the processing unit 3302 is further configured to control the multiple sound-producing apparatuses to operate according to the position information of the multiple areas where the multiple users are located and the position information of the multiple sound-producing apparatuses.
Optionally, the processing unit 3302 being further configured to control the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses includes: the processing unit 3302 is configured to determine a sound-field optimization center point, where the distances from the sound-field optimization center point to the center points of the multiple areas are equal; and to control each of the multiple sound-producing apparatuses to operate according to the distance between the sound-field optimization center point and each sound-producing apparatus.
Optionally, the transceiver unit 3301 is further configured to send a first instruction to a first prompting unit, where the first instruction instructs the first prompting unit to prompt the position information of the sound-field optimization center point.
Optionally, the multiple areas are areas inside a vehicle cabin.
Optionally, the multiple areas include a front-row area and a rear-row area.
Optionally, the multiple areas include a driver area and a front-passenger area.
Optionally, the transceiver unit 3301 is further configured to send a second instruction to a second prompting unit, where the second instruction instructs the second prompting unit to prompt the position information of the multiple areas where the multiple users are located.
Optionally, the processing unit 3302 is specifically configured to: adjust the playback intensity of each of the multiple sound-producing apparatuses.
Optionally, the multiple sound-producing apparatuses include a first sound-producing apparatus, and the processing unit 3302 is specifically configured to control the playback intensity of the first sound-producing apparatus to be a first playback intensity; the transceiver unit 3301 is further configured to receive a third instruction, where the third instruction instructs that the playback intensity of the first sound-producing apparatus be adjusted from the first playback intensity to a second playback intensity; and the processing unit 3302 is further configured to adjust the playback intensity of the first sound-producing apparatus to the second playback intensity.
Optionally, the sensing information includes one or more of image information, pressure information, and sound information.
An embodiment of this application further provides an apparatus, including a processing unit and a storage unit, where the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit so that the apparatus performs the control method for sound-producing apparatuses described above.
Optionally, the processing unit may be the processor 151 shown in FIG. 1, and the storage unit may be the memory 152 shown in FIG. 1, where the memory 152 may be a storage unit within a chip (for example, a register or a cache), or a storage unit in the vehicle located outside the chip (for example, a read-only memory or a random access memory).
An embodiment of this application further provides a vehicle, including the sound-producing system 3200 or the apparatus 3300 described above.
An embodiment of this application further provides a computer program product, the computer program product including computer program code that, when run on a computer, causes the computer to perform the methods described above.
An embodiment of this application further provides a computer-readable medium, the computer-readable medium storing program code that, when run on a computer, causes the computer to perform the methods described above.
During implementation, the steps of the methods above may be completed by an integrated logic circuit of hardware in the processor 151 or by instructions in the form of software. The methods disclosed with reference to the embodiments of this application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor 151. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor 151 reads the information in the memory 152 and completes the steps of the methods above in combination with its hardware. To avoid repetition, details are not described here again.
It should be understood that, in the embodiments of this application, the processor 151 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It should also be understood that, in the embodiments of this application, the memory 152 may include a read-only memory and a random access memory, and provide instructions and data to the processor.
In the embodiments of this application, "first", "second", and various numerals are merely distinctions made for convenience of description, for example, to distinguish different pipes or through-holes, and are not intended to limit the scope of the embodiments of this application.
It should be understood that the term "and/or" herein is merely an association relationship describing associated objects, indicating that three relationships may exist; for example, A and/or B may indicate the following three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It should be understood that, in the various embodiments of this application, the sequence numbers of the processes above do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above is only a specific implementation of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (25)

  1. A control method for sound-producing apparatuses, comprising:
    obtaining position information of multiple areas where multiple users are located; and
    controlling multiple sound-producing apparatuses to operate according to the position information of the multiple areas and position information of the multiple sound-producing apparatuses.
  2. The method according to claim 1, wherein the controlling the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses comprises:
    determining a sound-field optimization center point, wherein distances from the sound-field optimization center point to a center point of each of the multiple areas are equal; and
    controlling each of the multiple sound-producing apparatuses to operate according to a distance between the sound-field optimization center point and each of the multiple sound-producing apparatuses.
  3. The method according to claim 2, wherein the method further comprises:
    prompting position information of the sound-field optimization center point.
  4. The method according to any one of claims 1 to 3, wherein the multiple areas are areas inside a vehicle cabin.
  5. The method according to claim 4, wherein the multiple areas comprise a front-row area and a rear-row area.
  6. The method according to claim 4, wherein the multiple areas comprise a driver area and a front-passenger area.
  7. The method according to any one of claims 1 to 6, wherein the method further comprises:
    prompting the position information of the multiple areas where the multiple users are located.
  8. The method according to any one of claims 1 to 7, wherein the controlling the multiple sound-producing apparatuses to operate comprises:
    adjusting a playback intensity of each of the multiple sound-producing apparatuses.
  9. The method according to claim 8, wherein the multiple sound-producing apparatuses comprise a first sound-producing apparatus, and the adjusting the playback intensity of each of the multiple sound-producing apparatuses comprises:
    controlling the playback intensity of the first sound-producing apparatus to be a first playback intensity;
    wherein the method further comprises:
    obtaining an instruction from a user to adjust the playback intensity of the first sound-producing apparatus from the first playback intensity to a second playback intensity; and
    in response to obtaining the instruction, adjusting the playback intensity of the first sound-producing apparatus to the second playback intensity.
  10. The method according to any one of claims 1 to 9, wherein the obtaining position information of multiple areas where multiple users are located comprises:
    obtaining sensing information; and
    determining the position information of the multiple areas according to the sensing information,
    wherein the sensing information comprises one or more of image information, pressure information, and sound information.
  11. An electronic apparatus, comprising:
    a transceiver unit, configured to receive sensing information; and
    a processing unit, configured to obtain, according to the sensing information, position information of multiple areas where multiple users are located,
    wherein the processing unit is further configured to control multiple sound-producing apparatuses to operate according to the position information of the multiple areas and position information of the multiple sound-producing apparatuses.
  12. The electronic apparatus according to claim 11, wherein the processing unit being further configured to control the multiple sound-producing apparatuses to operate according to the position information of the multiple areas and the position information of the multiple sound-producing apparatuses comprises:
    the processing unit is configured to:
    determine a sound-field optimization center point, wherein distances from the sound-field optimization center point to a center point of each of the multiple areas are equal; and
    control each of the multiple sound-producing apparatuses to operate according to a distance between the sound-field optimization center point and each of the multiple sound-producing apparatuses.
  13. The electronic apparatus according to claim 12, wherein the transceiver unit is further configured to send a first instruction to a first prompting unit, and the first instruction instructs the first prompting unit to prompt position information of the sound-field optimization center point.
  14. The electronic apparatus according to any one of claims 11 to 13, wherein the multiple areas are areas inside a vehicle cabin.
  15. The electronic apparatus according to claim 14, wherein the multiple areas comprise a front-row area and a rear-row area.
  16. The electronic apparatus according to claim 14, wherein the multiple areas comprise a driver area and a front-passenger area.
  17. The electronic apparatus according to any one of claims 11 to 16, wherein the transceiver unit is further configured to send a second instruction to a second prompting unit, and the second instruction instructs the second prompting unit to prompt the position information of the multiple areas where the multiple users are located.
  18. The electronic apparatus according to any one of claims 11 to 17, wherein the processing unit is specifically configured to: adjust a playback intensity of each of the multiple sound-producing apparatuses.
  19. The electronic apparatus according to claim 18, wherein the multiple sound-producing apparatuses comprise a first sound-producing apparatus, and the processing unit is specifically configured to: control the playback intensity of the first sound-producing apparatus to be a first playback intensity;
    the transceiver unit is further configured to receive a third instruction, the third instruction instructing that the playback intensity of the first sound-producing apparatus be adjusted from the first playback intensity to a second playback intensity; and
    the processing unit is further configured to adjust the playback intensity of the first sound-producing apparatus to the second playback intensity.
  20. The electronic apparatus according to any one of claims 11 to 19, wherein the sensing information comprises one or more of image information, pressure information, and sound information.
  21. An electronic apparatus, comprising:
    a memory, configured to store instructions; and
    a processor, configured to read the instructions to perform the method according to any one of claims 1 to 10.
  22. A system, comprising a sensor and an electronic apparatus, wherein the electronic apparatus is the electronic apparatus according to any one of claims 11 to 21.
  23. The system according to claim 22, wherein the system further comprises multiple sound-producing apparatuses.
  24. A computer-readable storage medium, wherein the computer-readable medium stores program code that, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 10.
  25. A vehicle, comprising the electronic apparatus according to any one of claims 11 to 21, or comprising the system according to claim 22 or 23.
PCT/CN2022/102818 2021-06-30 2022-06-30 一种发声装置的控制方法、发声***以及车辆 WO2023274361A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22832173.3A EP4344255A1 (en) 2021-06-30 2022-06-30 Method for controlling sound production apparatuses, and sound production system and vehicle
US18/400,108 US20240236599A9 (en) 2021-06-30 2023-12-29 Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110744208.4 2021-06-29
CN202110744208.4A CN113596705B (zh) 2021-06-30 2021-06-30 Method for controlling sound-making apparatus, sound-making system, and vehicle

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/400,108 Continuation US20240236599A9 (en) 2021-06-30 2023-12-29 Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle

Publications (1)

Publication Number Publication Date
WO2023274361A1 true WO2023274361A1 (zh) 2023-01-05

Family

ID=78245719

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/102818 WO2023274361A1 (zh) Method for controlling sound-making apparatus, sound-making system, and vehicle 2021-06-30 2022-06-30

Country Status (3)

Country Link
EP (1) EP4344255A1 (zh)
CN (1) CN113596705B (zh)
WO (1) WO2023274361A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596705B (zh) * 2021-06-30 2023-05-16 Huawei Technologies Co., Ltd. Method for controlling sound-making apparatus, sound-making system, and vehicle
CN114038240B (zh) * 2021-11-30 2023-05-05 Dongfeng Commercial Vehicle Co., Ltd. Commercial vehicle sound field control method, apparatus, and device
CN117985035A (zh) * 2022-10-28 2024-05-07 Huawei Technologies Co., Ltd. Control method and apparatus, and vehicle

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103220597A (zh) * 2013-03-29 2013-07-24 Suzhou Sonavox Electronics Co., Ltd. In-vehicle sound field equalization device
CN103220594A (zh) * 2012-01-20 2013-07-24 Xinchang Co., Ltd. Sound effect control system for a vehicle
CN104270695A (zh) * 2014-09-01 2015-01-07 Goertek Inc. Method and system for automatically adjusting in-vehicle sound field distribution
CN107592588A (zh) * 2017-07-18 2018-01-16 iFLYTEK Co., Ltd. Sound field adjustment method and apparatus, storage medium, and electronic device
CN108551623A (zh) * 2018-05-15 2018-09-18 Shanghai Pateo Yuezhen Network Technology Service Co., Ltd. Vehicle and voice-recognition-based audio playback adjustment method thereof
CN108834030A (zh) * 2018-09-28 2018-11-16 Guangzhou Xiaopeng Motors Technology Co., Ltd. In-vehicle sound field adjustment method and audio system
US20190141465A1 * 2016-04-29 2019-05-09 Sqand Co. Ltd. System for correcting sound space inside vehicle
WO2019112087A1 (ko) * 2017-12-06 2019-06-13 PTG Co., Ltd. Directional sound system for a vehicle
CN109922411A (zh) * 2019-01-29 2019-06-21 Huizhou Huazhihang Technology Co., Ltd. Sound field control method and sound field control system
CN110149586A (zh) * 2019-05-23 2019-08-20 Gui'an New Area Xinte Electric Vehicle Industry Co., Ltd. Sound adjustment method and apparatus
CN112312280A (zh) * 2019-07-31 2021-02-02 Beijing Horizon Robotics Technology R&D Co., Ltd. In-vehicle sound playback method and apparatus
US20210152939A1 * 2019-11-19 2021-05-20 Analog Devices, Inc. Audio system speaker virtualization
CN113596705A (zh) * 2021-06-30 2021-11-02 Huawei Technologies Co., Ltd. Method for controlling sound-making apparatus, sound-making system, and vehicle

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9510126B2 (en) * 2012-01-11 2016-11-29 Sony Corporation Sound field control device, sound field control method, program, sound control system and server
CN204316717U (zh) * 2014-09-01 2015-05-06 Goertek Inc. System for automatically adjusting in-vehicle sound field distribution
US9509820B2 (en) * 2014-12-03 2016-11-29 Harman International Industries, Incorporated Methods and systems for controlling in-vehicle speakers
DK179663B1 (en) * 2015-10-27 2019-03-13 Bang & Olufsen A/S Loudspeaker with controlled sound fields
CN113055810A (zh) * 2021-03-05 2021-06-29 Guangzhou Xiaopeng Motors Technology Co., Ltd. Sound effect control method, apparatus, system, vehicle, and storage medium


Also Published As

Publication number Publication date
EP4344255A1 (en) 2024-03-27
CN113596705A (zh) 2021-11-02
CN113596705B (zh) 2023-05-16
US20240137721A1 (en) 2024-04-25

Similar Documents

Publication Publication Date Title
US20210280055A1 (en) Feedback performance control and tracking
WO2021052213A1 (zh) Method and apparatus for adjusting accelerator pedal characteristics
WO2023274361A1 (zh) Method for controlling sound-making apparatus, sound-making system, and vehicle
US10286905B2 (en) Driver assistance apparatus and control method for the same
WO2022000448A1 (zh) Interaction method for in-vehicle mid-air gestures, electronic apparatus, and system
WO2022205243A1 (zh) Lane-change area acquisition method and apparatus
WO2020031812A1 (ja) Information processing device, information processing method, information processing program, and mobile body
EP3892960A1 (en) Systems and methods for augmented reality in a vehicle
CN115042821B (zh) Vehicle control method and apparatus, vehicle, and storage medium
WO2021217575A1 (zh) Method and apparatus for identifying an object of interest to a user
WO2024093768A1 (zh) Vehicle warning method and related device
CN115056784B (zh) Vehicle control method and apparatus, vehicle, storage medium, and chip
CN114828131B (zh) Communication method, medium, in-vehicle communication system, chip, and vehicle
CN115170630A (zh) Map generation method and apparatus, electronic device, vehicle, and storage medium
US20240236599A9 (en) Sound-Making Apparatus Control Method, Sound-Making System, and Vehicle
CN114572219B (zh) Automatic overtaking method and apparatus, vehicle, storage medium, and chip
CN114771514B (zh) Vehicle driving control method, apparatus, device, medium, chip, and vehicle
CN114802435B (zh) Vehicle control method and apparatus, vehicle, storage medium, and chip
CN115297434B (zh) Service invocation method and apparatus, vehicle, readable storage medium, and chip
WO2023050058A1 (zh) Method and apparatus for controlling the field of view of a vehicle-mounted camera, and vehicle
CN115535004B (zh) Distance generation method and apparatus, storage medium, and vehicle
CN115063639B (zh) Model generation method, image semantic segmentation method and apparatus, vehicle, and medium
WO2024131698A1 (zh) Method for adjusting a seat in a vehicle, parking method, and related device
CN114802217B (zh) Method and apparatus for determining a parking mode, storage medium, and vehicle
WO2023106235A1 (ja) Information processing device, information processing method, and vehicle control system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22832173

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: MX/A/2023/015457

Country of ref document: MX

WWE Wipo information: entry into national phase

Ref document number: 2022832173

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2023579623

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022832173

Country of ref document: EP

Effective date: 20231221