CN115278462A - In-vehicle audio processing method and system, electronic equipment and storage medium - Google Patents

In-vehicle audio processing method and system, electronic equipment and storage medium

Info

Publication number
CN115278462A
Authority
CN
China
Prior art keywords
audio
target
vehicle
region
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210911455.3A
Other languages
Chinese (zh)
Other versions
CN115278462B (en)
Inventor
连星
杨森
杨明灯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210911455.3A priority Critical patent/CN115278462B/en
Publication of CN115278462A publication Critical patent/CN115278462A/en
Application granted granted Critical
Publication of CN115278462B publication Critical patent/CN115278462B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/12Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01Aspects of volume control, not necessarily automatic, in sound systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/13Acoustic transducers and sound field adaptation in vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Otolaryngology (AREA)
  • Psychiatry (AREA)
  • Mechanical Engineering (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Multimedia (AREA)
  • Hospice & Palliative Care (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The application provides an in-vehicle audio processing method, system, electronic device and storage medium. The method comprises: acquiring audio information input by a target object in a target audio region, identifying the audio information, and generating an audio interaction request associated with the target object; driving a digital signal processor, based on the audio interaction request, to mix a plurality of sound sources and generate audio to be played; and calling a service interface of the digital signal processor, transmitting the audio to be played from the service interface to an audio output device through a data distribution service, and playing the audio to be played in the target audio region through the audio output device. By using DDS communication, the application turns the audio system into software services, so that application-layer service logic can be quickly organized for new scene requirements and newly added or personalized sound functions can be realized. The application can also provide different audio services to multiple sound zones at the same time, thereby partitioning the in-vehicle audio and improving the riding experience of the driver and passengers.

Description

In-vehicle audio processing method and system, electronic equipment and storage medium
Technical Field
The application relates to the technical field of automobile control, and in particular to an in-vehicle audio processing method, an in-vehicle audio processing system, electronic equipment and a storage medium.
Background
With the trend toward intelligent, digital, networked and service-oriented automobiles, the requirements placed on vehicle systems are becoming increasingly complex. Voice is an important carrier of human-machine interaction, and users' demands on voice functions are growing in both number and complexity. The traditional whole-vehicle audio system therefore needs to be optimized and upgraded. However, when such optimization and upgrading is attempted today, the traditional whole-vehicle audio system exhibits the following problems:
1. All loudspeakers or earphones in the vehicle play essentially the same source content.
2. The vehicle is equipped with only one or two microphones, so only one user can hold a voice interaction with the vehicle at any one time.
3. The entertainment system and the instrument cluster each have their own sound system and cannot be coordinated to provide unified, personalized audio services to the user.
4. Because the vehicle audio software is not service-oriented, adding a new sound requires substantial changes, long development time, and sometimes even changes to the software architecture.
5. The sound system is not intelligent enough to actively recognize the user's emotion and actively adjust the sound to shape the cabin atmosphere.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present application provides an in-vehicle audio processing method, system, electronic device and storage medium to solve the above technical problems.
The application provides an in-vehicle audio processing method, which comprises the following steps:
acquiring audio information input by a target object in a target audio region; wherein the target audio region comprises at least one of a plurality of audio regions preset within a target vehicle, the target object comprises at least one person associated with the target vehicle, and the target vehicle comprises a vehicle determined in advance or in real time;
identifying the audio information and generating an audio interaction request associated with the target object;
driving a digital signal processor to mix sound of a plurality of sound sources based on the audio interaction request to generate audio to be played;
calling a service interface of the digital signal processor, transmitting the audio to be played from the service interface to an audio output device through a data distribution service, and playing the audio to be played in the target audio region through the audio output device; wherein the audio output device is located at the target audio region.
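For illustration only (not part of the original disclosure), the following minimal C++ sketch shows one way the entities named in these steps could be represented; all type and field names (AudioRegion, AudioInteractionRequest, and so on) are hypothetical.

```cpp
// Hypothetical data types illustrating the entities in the claimed steps.
// Names (AudioRegion, AudioInteractionRequest, ...) are illustrative only.
#include <cstdint>
#include <string>
#include <vector>

enum class AudioRegion : uint8_t {
    Driver,          // driver-seat zone
    FrontPassenger,  // front-passenger zone
    RearLeft,
    RearRight
};

// Raw audio captured by the microphone of one zone.
struct AudioInput {
    AudioRegion region;               // zone where the audio was captured
    std::vector<int16_t> pcm;         // PCM samples from the zone microphone
    uint32_t sample_rate_hz = 16000;
};

// Result of recognizing the captured audio: an interaction request
// associated with the target object (driver or passenger) in that zone.
struct AudioInteractionRequest {
    AudioRegion region;   // zone the request came from
    std::string intent;   // e.g. "play_music", "navigate", "set_ac_temp"
    std::string payload;  // intent parameters as free text / JSON
};
```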
In an embodiment of the present application, after acquiring audio information input by a target object in a target audio region, the method further includes:
shooting the target object to obtain a corresponding target image;
identifying the target image to acquire the expression and emotion of the target object;
and calling the digital signal processor based on the expression and emotion of the target object to adjust an air conditioner, a vehicle window, an atmosphere lamp and music in the target vehicle.
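As a rough illustration of this emotion-driven adjustment (a sketch under assumed emotion labels and setting values, not the patent's actual algorithm), a recognized emotion might be mapped to cabin settings as follows:

```cpp
// Illustrative mapping from a recognized emotion to cabin adjustments.
// Emotion labels and setting values are assumptions for the example.
#include <string>

enum class Emotion { Calm, Happy, Tired, Stressed };

struct CabinSettings {
    float ac_temperature_c;     // air-conditioner target temperature
    bool window_vent_open;      // crack the window for fresh air
    std::string atmosphere_lamp; // atmosphere-lamp color theme
    std::string playlist;        // music selected for the mood
};

// Decide cabin settings from the emotion recognized in the target image.
CabinSettings settingsForEmotion(Emotion e) {
    switch (e) {
        case Emotion::Tired:    return {21.0f, true,  "cool_blue",  "upbeat"};
        case Emotion::Stressed: return {23.0f, false, "warm_amber", "relaxing"};
        case Emotion::Happy:    return {22.5f, false, "rainbow",    "pop"};
        case Emotion::Calm:
        default:                return {22.5f, false, "soft_white", "ambient"};
    }
}
```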
In an embodiment of the present application, after acquiring audio information input by a target object in a target audio region, the method further includes:
acquiring an interactive signal of the target object on a preset display screen;
calling the digital signal processor based on the interactive signal, and acquiring at least one video from a preset storage area;
and transmitting the acquired video to the preset display screen for playing.
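A compact sketch of this display-interaction embodiment is given below; the types, the lookup path and the function names are assumptions made for illustration.

```cpp
// Hypothetical handling of a touch interaction on a zone display:
// look up a video in a preset storage area and send it to that display.
#include <optional>
#include <string>

struct TouchEvent {
    int region_id;        // which zone's display was touched
    std::string item_id;  // id of the video tile the user selected
};

// Resolve the selected item to a file path in the preset storage area.
std::optional<std::string> findVideo(const std::string& item_id) {
    // Illustrative lookup; a real system would query a media index.
    if (item_id.empty()) return std::nullopt;
    return "/media/videos/" + item_id + ".mp4";
}

// Forward the resolved video to the display of the originating zone.
bool playOnDisplay(const TouchEvent& ev) {
    auto path = findVideo(ev.item_id);
    if (!path) return false;
    // sendToDisplay(ev.region_id, *path);  // transport left abstract here
    return true;
}
```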
In one embodiment of the present application, the plurality of audio regions preset in the target vehicle comprise a driving domain audio region, a cockpit domain audio region and a vehicle control domain audio region; the driving domain audio region is connected to the cockpit domain audio region and to the vehicle control domain audio region respectively, and the cockpit domain audio region is also connected to the vehicle control domain audio region; any two of the driving domain audio region, the cockpit domain audio region and the vehicle control domain audio region exchange data through a data distribution service protocol.
In an embodiment of the present application, the driving domain audio region, the cockpit domain audio region and the vehicle control domain audio region are each configured with a display screen, a camera, a microphone, a loudspeaker and an earphone;
the microphone is used for acquiring audio information input by a target object in a target audio region;
the loudspeaker or the earphone constitutes the audio output device and is used for playing the audio to be played in the target audio region;
the display screen is used for playing or displaying videos;
the camera is used for shooting a target object.
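To make the per-region hardware configuration concrete, the sketch below lists the devices attached to each audio region as an assumed configuration structure; the device identifiers are illustrative only.

```cpp
// Assumed description of the devices configured for each audio region
// (display screen, camera, microphone, loudspeaker and/or earphone).
#include <string>
#include <vector>

struct ZoneDevices {
    std::string zone;        // e.g. "driver", "front_passenger", ...
    std::string display;     // display device node
    std::string camera;      // AI camera device node
    std::string microphone;  // microphone capture device
    std::string output;      // loudspeaker or earphone output device
};

std::vector<ZoneDevices> defaultZoneLayout() {
    return {
        {"driver",          "disp0", "cam0", "mic0", "spk0"},
        {"front_passenger", "disp1", "cam1", "mic1", "spk1"},
        {"rear_left",       "disp2", "cam2", "mic2", "hp0"},
        {"rear_right",      "disp3", "cam3", "mic3", "hp1"},
    };
}
```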
In an embodiment of the application, the process of driving the digital signal processor to mix a plurality of sound sources based on the audio interaction request and generate the audio to be played includes:
responding to the audio interaction request, and driving the digital signal processor to select, according to the response result, the sound source corresponding to the audio interaction request from a plurality of preset sound sources, the selected source being denoted as the target sound source;
and performing sound mixing, sound effect processing, sound field processing and gain control on the target sound source to generate the audio to be played.
In an embodiment of the present application, the preset plurality of sound sources includes: a multimedia music sound source, a navigation sound source, a voice sound source, a Bluetooth telephone sound source, a key-tone sound source, an alarm sound source, a game background sound source and a video background sound source.
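The mixing step can be pictured as in the sketch below: the source matching the request is selected from the preset list, mixed with any other active sources, and passed through gain control with a simple limiter. The processing stages are simplified stand-ins for the DSP firmware, not the actual implementation.

```cpp
// Simplified illustration of source selection and mixing for one zone.
// The gain stage and limiter are placeholders for the DSP processing chain.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct SoundSource {
    std::string name;            // "media", "navigation", "bt_phone", ...
    std::vector<float> samples;  // one block of source audio
    float gain = 1.0f;
};

// Pick the source whose name matches the interaction request.
const SoundSource* selectTargetSource(const std::vector<SoundSource>& sources,
                                      const std::string& requested) {
    for (const auto& s : sources)
        if (s.name == requested) return &s;
    return nullptr;
}

// Mix all active sources, apply master gain and clamp to avoid clipping.
std::vector<float> mixToPlayback(const std::vector<SoundSource>& active,
                                 float master_gain) {
    std::size_t len = 0;
    for (const auto& s : active) len = std::max(len, s.samples.size());
    std::vector<float> out(len, 0.0f);
    for (const auto& s : active)
        for (std::size_t i = 0; i < s.samples.size(); ++i)
            out[i] += s.samples[i] * s.gain;
    for (float& v : out)
        v = std::clamp(v * master_gain, -1.0f, 1.0f);  // gain control + limiter
    return out;
}
```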
The application also provides an in-vehicle audio processing system, the system including:
the audio acquisition module is used for acquiring audio information input by a target object in a target audio region; wherein the target audio region comprises at least one of a plurality of audio regions preset within a target vehicle, the target object comprises at least one person associated with the target vehicle, and the target vehicle comprises a vehicle determined in advance or in real time;
the audio identification module is used for identifying the audio information and generating an audio interaction request associated with the target object;
the audio mixing module is used for driving the digital signal processor to mix audio of a plurality of sound sources according to the audio interaction request so as to generate audio to be played;
the audio playing module is used for calling a service interface of the digital signal processor, transmitting the audio to be played from the service interface to an audio output device through a data distribution service, and playing the audio to be played in the target audio region through the audio output device; wherein the audio output device is located at the target audio region.
The present application further provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the in-vehicle audio processing method as in any one of the above.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the in-vehicle audio processing method as defined in any one of the above.
As described above, the present application provides an in-vehicle audio processing method, system, electronic device, and storage medium, which have the following beneficial effects:
firstly, audio information input by a target object in a target audio region is acquired, the audio information is identified, and an audio interaction request associated with the target object is generated; then, based on the audio interaction request, a digital signal processor is driven to mix a plurality of sound sources and generate audio to be played; finally, a service interface of the digital signal processor is called, the audio to be played is transmitted from the service interface to an audio output device through a data distribution service, and the audio output device plays it in the target audio region; wherein the target audio region comprises at least one of a plurality of audio regions preset within the target vehicle, the target object comprises at least one person associated with the target vehicle, the target vehicle comprises a vehicle determined in advance or in real time, and the audio output device is located in the target audio region. The application thus uses DDS (Data Distribution Service) communication to turn the audio system into software services, so that application-layer service logic can be quickly organized for new scene requirements and newly added or personalized sound functions can be realized. The application can also provide different audio services to multiple sound zones at the same time, thereby partitioning the in-vehicle audio and improving the riding experience of the driver and passengers. In addition, an AI (Artificial Intelligence) module recognizes the activity and emotion of the user in each audio region, and the sound source content and volume of each audio region can be controlled separately through a series of algorithm strategies. Meanwhile, each audio region can hold a voice conversation with the vehicle through its own microphone, realizing personalized human-machine interaction, for example controlling the air-conditioning temperature or blowing mode of a given zone, the volume of a given zone, or the sound source of a given zone. Each audio region can also interact with the vehicle through its display screen for personalized human-machine interaction: for example, the driver zone broadcasts navigation and traffic information in real time; the front-passenger zone connects to Bluetooth and plays Bluetooth music; the rear-left display screen plays a film; and the rear-right display screen mirrors a mobile phone game and plays the game background sound. Furthermore, the vehicle control domain can provide a complete set of basic service interfaces of the audio system, so that the driving domain and the cockpit domain can quickly organize application-layer service logic according to scene requirements and realize newly added or personalized sound functions.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic diagram of an exemplary system architecture to which aspects of one or more embodiments of the present application may be applied;
fig. 2 is a schematic flowchart of an in-vehicle audio processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a DDS communication of an audio region provided in an embodiment of the present application;
fig. 4 is a schematic diagram of vehicle interior audio region division provided by an embodiment of the present application;
fig. 5 is a schematic diagram of audio mixing of audio channels according to an embodiment of the present application;
fig. 6 is a schematic hardware configuration diagram of an in-vehicle audio processing system according to an embodiment of the present application;
fig. 7 is a schematic hardware configuration diagram of an in-vehicle audio processing system according to another embodiment of the present application;
fig. 8 is a hardware configuration diagram of an electronic device suitable for implementing one or more embodiments of the present application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the disclosure herein, wherein the embodiments of the present application will be described in detail with reference to the accompanying drawings and preferred embodiments. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It should be understood that the preferred embodiments are for purposes of illustration only and are not intended to limit the scope of the present disclosure.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present application, and the drawings only show the components related to the present application and are not drawn according to the number, shape and size of the components in actual implementation, and the type, number and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of the embodiments of the present application, however, it will be apparent to one skilled in the art that the embodiments of the present application may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring the embodiments of the present application.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which technical solutions in one or more embodiments of the present application may be applied. As shown in fig. 1, system architecture 100 may include terminal device 110, network 120, and server 130. The terminal device 110 may include various electronic devices such as a smart phone, a tablet computer, a notebook computer, and a desktop computer. The server 130 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. Network 120 may be a communication medium of various connection types capable of providing a communication link between terminal device 110 and server 130, such as a wired communication link or a wireless communication link.
The system architecture in the embodiments of the present application may have any number of terminal devices, networks, and servers, according to implementation needs. For example, the server 130 may be a server group composed of a plurality of server devices. In addition, the technical solution provided in the embodiment of the present application may be applied to the terminal device 110, may also be applied to the server 130, or may be implemented by both the terminal device 110 and the server 130, which is not particularly limited in this application.
In an embodiment of the present application, the terminal device 110 or the server 130 may acquire audio information input by a target object in a target audio region, identify the audio information, and generate an audio interaction request associated with the target object; then drive a digital signal processor, based on the audio interaction request, to mix a plurality of sound sources and generate audio to be played; and finally call a service interface of the digital signal processor, transmit the audio to be played from the service interface to an audio output device through a data distribution service, and play the audio to be played in the target audio region through the audio output device. Executing the in-vehicle audio processing method on the terminal device 110 or the server 130 turns the audio system into software services by means of DDS communication, so that application-layer service logic can be quickly organized for new scene requirements and newly added or personalized sound functions can be realized. Different audio services can be provided to multiple sound zones at the same time, partitioning the in-vehicle audio and improving the riding experience of the driver and passengers. In addition, the AI module recognizes the activities and emotions of the users in each audio region, and the sound source content and volume of each audio region can be controlled separately through a series of algorithm strategies. Meanwhile, each audio region can hold a voice conversation with the vehicle through its microphone for personalized human-machine interaction, for example controlling the air-conditioning temperature or blowing mode, the volume, or the sound source of a given zone. Each audio region can also interact with the vehicle through its display screen: for example, the driver zone broadcasts navigation and traffic information in real time; the front-passenger zone connects to Bluetooth and plays Bluetooth music; the rear-left display screen plays a film; and the rear-right display screen mirrors a mobile phone game and plays the game background sound. The vehicle control domain can provide a complete set of basic service interfaces of the audio system, so that the driving domain and the cockpit domain can quickly organize application-layer service logic according to scene requirements and realize newly added or personalized sound functions.
The above section introduces the content of an exemplary system architecture to which the technical solution of the present application is applied, and then continues to introduce the in-vehicle audio processing method of the present application.
Fig. 2 shows a schematic flowchart of an in-vehicle audio processing method according to an embodiment of the present application. Specifically, in an exemplary embodiment, as shown in fig. 2, the present embodiment provides an in-vehicle audio processing method, including the steps of:
s210, acquiring audio information input by a target object in a target audio region; wherein the target audio region comprises at least one of a plurality of audio regions preset within a target vehicle, the target object comprises at least one person associated with the target vehicle, and the target vehicle comprises a vehicle determined in advance or in real time. By way of example, the target objects in the present embodiment include, but are not limited to: driver, passenger; target vehicles include, but are not limited to: new energy vehicles, fuel vehicles; the preset multiple audio frequency regions in the target vehicle comprise: a driving domain audio region, a cockpit domain audio region and a vehicle control domain audio region.
S220, identifying the audio information and generating an audio interaction request associated with the target object;
S230, driving a Digital Signal Processor (DSP), based on the audio interaction request, to mix a plurality of sound sources and generate the audio to be played.
S240, calling a service interface of the digital signal processor, transmitting the audio to be played from the service interface to an audio output device through a data distribution service, and playing the audio to be played in the target audio region through the audio output device; wherein the audio output device is located at the target audio region.
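Read together, S210 to S240 form a simple pipeline. The sketch below strings the four steps into one handler; every function here is a trivial stub standing in for the capture, recognition, DSP-mixing and DDS-transport services described above, so the names and signatures are assumptions rather than the disclosed implementation.

```cpp
// Hypothetical end-to-end flow for S210-S240. The four helpers are trivial
// stubs standing in for the capture, recognition, DSP and DDS services.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Request { int region; std::string intent; };
struct Audio   { int region; std::vector<float> samples; };

std::vector<int16_t> captureFromZoneMic(int /*region*/) {        // S210 stub
    return std::vector<int16_t>(160, 0);                         // one silent frame
}
Request recognize(int region, const std::vector<int16_t>&) {     // S220 stub
    return {region, "play_music"};
}
Audio dspMixForRequest(const Request& req) {                     // S230 stub
    return {req.region, std::vector<float>(160, 0.0f)};
}
void publishOverDds(const Audio& audio) {                        // S240 stub
    std::cout << "publish " << audio.samples.size()
              << " samples to zone " << audio.region << "\n";
}

int main() {
    int region = 0;                               // driver zone, for example
    auto pcm = captureFromZoneMic(region);        // S210: acquire audio input
    Request req = recognize(region, pcm);         // S220: generate interaction request
    Audio out = dspMixForRequest(req);            // S230: drive DSP to mix sources
    publishOverDds(out);                          // S240: deliver via DDS and play
}
```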
Therefore, in this embodiment, data distribution service (DDS) communication turns the audio system into software services, so that application-layer service logic can be quickly organized for new scene requirements and newly added or personalized sound functions can be realized. In addition, different audio services can be provided to multiple sound zones at the same time, partitioning the in-vehicle audio and improving the riding experience of the driver and passengers. Moreover, each audio region can hold a voice conversation with the vehicle through its microphone for personalized human-machine interaction, for example controlling the air-conditioning temperature or blowing mode of a given zone, the volume of a given zone, or the sound source of a given zone. Each audio region can also interact with the vehicle through its display screen for personalized human-machine interaction: for example, the driver zone broadcasts navigation and traffic information in real time; the front-passenger zone connects to Bluetooth and plays Bluetooth music; the rear-left display screen plays a film; and the rear-right display screen mirrors a mobile phone game and plays the game background sound.
In an exemplary embodiment, after acquiring the audio information input by the target object in the target audio region, the method further includes: shooting the target object to obtain a corresponding target image; identifying the target image to acquire the expression and emotion of the target object; and calling the digital signal processor based on the expression and emotion of the target object to adjust the air conditioner, the vehicle windows, the atmosphere lamps and the music in the target vehicle. Thus, in this embodiment, the AI camera in each audio region can capture the user's expression and emotion and actively adjust the air conditioner, windows, atmosphere lamps and music in the vehicle, so that the user's mood tends to be comfortable and pleasant.
In an exemplary embodiment, after acquiring the audio information input by the target object in the target audio region, the method further includes: acquiring an interactive signal of the target object on a preset display screen; calling the digital signal processor based on the interactive signal, and acquiring at least one video from a preset storage area; and transmitting the acquired video to the preset display screen for playing. Therefore, in the embodiment, the user in each audio region can touch the display screen and then view the video played on the display screen, so that human-computer interaction is realized.
In an exemplary embodiment, the plurality of audio regions preset in the target vehicle comprise a driving domain audio region, a cockpit domain audio region and a vehicle control domain audio region, wherein the driving domain audio region is connected to the cockpit domain audio region and to the vehicle control domain audio region respectively, and the cockpit domain audio region is also connected to the vehicle control domain audio region; any two of the driving domain audio region, the cockpit domain audio region and the vehicle control domain audio region exchange data through the DDS protocol. In this embodiment, the vehicle control domain can provide a complete set of basic service interfaces of the audio system, so that the driving domain and the cockpit domain can quickly organize application-layer service logic according to scene requirements and realize newly added or personalized sound functions. As shown in fig. 3, in this embodiment the DDS protocol is used between the domains to turn the sound partitioning of the whole vehicle into software services. The vehicle control domain mainly drives the DSP chip and exposes the functions that the DSP chip can realize as corresponding basic service interfaces, so that the cockpit domain and the driving domain call the corresponding DSP basic service interfaces to satisfy different sound requirements in a specific scene. The cockpit domain and the driving domain compose reasonable combinations of services according to the different requirements of users in different scenes; a function can be realized quickly simply by reorganizing and developing the upper-layer service logic software and calling the DSP service interfaces, so that user requirements are met to the greatest extent.
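The inter-domain exchange can be pictured as a publish/subscribe interaction: the cockpit or driving domain publishes a request on a topic, and the vehicle control domain, which wraps the DSP driver, subscribes to that topic and executes the requested basic service. The interface below is a hand-written abstraction for illustration only; it is not the API of any particular DDS implementation, and the topic name and message fields are assumptions.

```cpp
// Illustrative abstraction of the inter-domain exchange. This is NOT the API
// of a real DDS library; topic names and message fields are assumptions.
#include <functional>
#include <string>

// Message published by the cockpit/driving domain on a request topic.
struct DspServiceRequest {
    std::string service;   // e.g. "set_zone_volume", "route_source_to_zone"
    std::string zone;      // target audio region
    std::string argument;  // service-specific parameter
};

// Minimal publish/subscribe interface that a DDS binding would provide.
class IDomainBus {
public:
    virtual ~IDomainBus() = default;
    virtual void publish(const std::string& topic,
                         const DspServiceRequest& msg) = 0;
    virtual void subscribe(const std::string& topic,
                           std::function<void(const DspServiceRequest&)> cb) = 0;
};

// Vehicle control domain side: expose the DSP driver as basic services.
void registerDspBasicServices(IDomainBus& bus) {
    bus.subscribe("audio/dsp_service_request",
                  [](const DspServiceRequest& req) {
                      // Dispatch to the DSP driver, e.g. adjust gain for req.zone.
                      (void)req;
                  });
}

// Cockpit domain side: compose a scene by calling a basic service.
void requestZoneVolume(IDomainBus& bus, const std::string& zone, int volume) {
    bus.publish("audio/dsp_service_request",
                {"set_zone_volume", zone, std::to_string(volume)});
}
```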
As an example, the driving domain audio region, the cockpit domain audio region and the vehicle control domain audio region in this embodiment are configured with a display screen, a camera, a microphone, a loudspeaker and an earphone. The microphone is used for acquiring audio information input by a target object in a target audio region; the loudspeaker or the earphone constitutes the audio output device and is used for playing the audio to be played in the target audio region; the display screen is used for playing or displaying videos; and the camera is used for shooting the target object. Specifically, as shown in fig. 4, the vehicle in this embodiment may be divided into four audio regions: a driver audio region, a front-passenger audio region, a rear-left audio region and a rear-right audio region. Each audio region is provided with its own display screen, AI camera, microphone and loudspeaker/earphone. Thus, in this embodiment, the user in each audio region can touch the display screen or watch the video it plays, realizing human-machine interaction; the users in the audio regions can hold voice conversations through the microphones, realizing human-machine interaction; and users in each audio region can listen to music and audiobooks through the loudspeaker/earphone, realizing human-machine interaction. The AI camera in each audio region can capture the user's expression and emotion and actively adjust the air conditioner, windows, atmosphere lamps and music in the vehicle, so that the user's mood tends to be comfortable and pleasant.
In an exemplary embodiment, the process of driving the digital signal processor to mix a plurality of sound sources based on the audio interaction request and generate the audio to be played includes: responding to the audio interaction request, and driving the digital signal processor to select, according to the response result, the sound source corresponding to the audio interaction request from a plurality of preset sound sources, the selected source being denoted as the target sound source; and performing sound mixing, sound effect processing, sound field processing and gain control on the target sound source to generate the audio to be played. Specifically, as shown in fig. 5, the sound sources in this embodiment include but are not limited to: a multimedia music sound source, a navigation sound source, a voice sound source, a Bluetooth telephone sound source, a key-tone sound source, an alarm sound source, a game background sound source, a video background sound source, eCall/bCall, and the like. These sound sources can be mixed in the DSP, and after sound effect processing, noise reduction, gain control, sound field control and the like are applied to specific sound sources, the result is output to the loudspeakers/earphones of the different audio regions. For example, the driver audio region plays multimedia music and navigation; the front-passenger audio region plays a Bluetooth telephone call; the rear-left audio region plays a film; and the rear-right audio region plays a mobile phone game.
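One way to picture this per-region routing (for example media plus navigation to the driver region and a Bluetooth call to the front-passenger region) is a source-to-region gain matrix, sketched below with assumed source names and gain values; the actual routing is performed inside the DSP and is not specified at this level of detail.

```cpp
// Illustrative source-to-region routing table ("gain matrix"). A gain of 0
// means the source is muted in that region; all values are assumptions.
#include <array>

enum Source { Media = 0, Navigation, Voice, BtPhone, GameBg, VideoBg, SourceCount };
enum Zone   { Driver = 0, FrontPassenger, RearLeft, RearRight, ZoneCount };

using GainMatrix = std::array<std::array<float, ZoneCount>, SourceCount>;

// Example scene: driver hears media + navigation, front passenger takes a
// Bluetooth call, rear-left watches a film, rear-right plays a mapped game.
GainMatrix exampleScene() {
    GainMatrix g{};                        // all gains start at 0 (muted)
    g[Media][Driver]           = 0.8f;
    g[Navigation][Driver]      = 1.0f;
    g[BtPhone][FrontPassenger] = 1.0f;
    g[VideoBg][RearLeft]       = 0.9f;
    g[GameBg][RearRight]       = 0.9f;
    return g;
}

// Mix one sample frame of all sources for one region using the matrix.
float mixSampleForZone(const GainMatrix& g,
                       const std::array<float, SourceCount>& frame, Zone z) {
    float out = 0.0f;
    for (int s = 0; s < SourceCount; ++s) out += frame[s] * g[s][z];
    return out;
}
```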
In summary, the present application provides an in-vehicle audio processing method which first acquires audio information input by a target object in a target audio region, identifies the audio information, and generates an audio interaction request associated with the target object; then, based on the audio interaction request, drives a digital signal processor to mix a plurality of sound sources and generate audio to be played; and finally calls a service interface of the digital signal processor, transmits the audio to be played from the service interface to an audio output device through a data distribution service, and plays it in the target audio region through the audio output device; wherein the target audio region comprises at least one of a plurality of audio regions preset within the target vehicle, the target object comprises at least one person associated with the target vehicle, the target vehicle comprises a vehicle determined in advance or in real time, and the audio output device is located in the target audio region. The method uses DDS communication to turn the audio system into software services, so that application-layer service logic can be quickly organized for new scene requirements and newly added or personalized sound functions can be realized. The method can also provide different audio services to multiple sound zones at the same time, partitioning the in-vehicle audio and improving the riding experience of the driver and passengers. In addition, the method recognizes the activities and emotions of the users in each audio region through the AI module and can control the sound source content and volume of each audio region separately through a series of algorithm strategies. Meanwhile, each audio region can hold a voice conversation with the vehicle through its microphone for personalized human-machine interaction, for example controlling the air-conditioning temperature or blowing mode of a given zone, the volume of a given zone, or the sound source of a given zone. Each audio region can also interact with the vehicle through its display screen: for example, the driver zone broadcasts navigation and traffic information in real time; the front-passenger zone connects to Bluetooth and plays Bluetooth music; the rear-left display screen plays a film; and the rear-right display screen mirrors a mobile phone game and plays the game background sound. The vehicle control domain can provide a complete set of basic service interfaces of the audio system, so that the driving domain and the cockpit domain can quickly organize application-layer service logic according to scene requirements and realize newly added or personalized sound functions. In addition, compared with the prior art, the audio content output by each loudspeaker can be determined by the upper-layer application, and the cockpit domain, the driving domain and the vehicle control domain can flexibly control the audio system according to scene requirements.
Besides audio amplification, the method also supports audio mixing, sound effect processing, sound field control, noise reduction, voice recognition, Bluetooth communication, abnormal-condition alarms and other functions, forming an audio system designed for the whole vehicle. With the data-oriented service framework, a newly added audio scene function can be realized simply by developing the scene service in the upper-layer application and calling the atomic or newly added services provided by the bottom layer, which makes OTA iteration easy.
As shown in fig. 6, the present application further provides an in-vehicle audio processing system, which includes:
the audio acquisition module 610 is configured to acquire audio information input by a target object in a target audio region; wherein the target audio region comprises at least one of a plurality of audio regions preset within a target vehicle, the target object comprises at least one person associated with the target vehicle, and the target vehicle comprises a vehicle determined in advance or in real time. By way of example, the target object in this embodiment includes, but is not limited to, a driver or a passenger; the target vehicle includes, but is not limited to, a new energy vehicle or a fuel vehicle; and the plurality of audio regions preset in the target vehicle comprise a driving domain audio region, a cockpit domain audio region and a vehicle control domain audio region.
An audio identification module 620, configured to identify the audio information and generate an audio interaction request associated with the target object;
the audio mixing module 630 is configured to drive the digital signal processor to mix audio of multiple audio sources according to the audio interaction request, so as to generate an audio to be played;
the audio playing module 640 is configured to invoke a service interface of the digital signal processor, transmit the audio to be played from the service interface to an audio output device through a data distribution service, and play the audio to be played in the target audio region through the audio output device; wherein the audio output device is located in the target audio region.
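As a purely structural sketch, the four modules can be wired into one pipeline as below; the class names mirror the module names of this embodiment, while the member functions and stub bodies are assumptions that complement the step-by-step flow sketched earlier.

```cpp
// Structural sketch of the four system modules wired into one pipeline.
// Class and method names are illustrative, not the patent's implementation.
#include <string>
#include <vector>

struct AcquiredAudio { int region; std::vector<short> pcm; };
struct InteractionRequest { int region; std::string intent; };
struct PlayableAudio { int region; std::vector<float> samples; };

class AudioAcquisitionModule {
public:
    AcquiredAudio acquire(int region) { return {region, {}}; }  // capture stub
};
class AudioIdentificationModule {
public:
    InteractionRequest identify(const AcquiredAudio& a) { return {a.region, "play_music"}; }
};
class AudioMixingModule {
public:
    PlayableAudio mix(const InteractionRequest& r) { return {r.region, {}}; }  // DSP stub
};
class AudioPlayingModule {
public:
    void play(const PlayableAudio& a) { (void)a; }  // DDS transport + output stub
};

class InVehicleAudioSystem {
public:
    void onZoneInput(int region) {
        auto audio   = acquisition_.acquire(region);
        auto request = identification_.identify(audio);
        auto out     = mixing_.mix(request);
        playing_.play(out);
    }
private:
    AudioAcquisitionModule    acquisition_;
    AudioIdentificationModule identification_;
    AudioMixingModule         mixing_;
    AudioPlayingModule        playing_;
};
```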
Therefore, in this embodiment, data distribution service (DDS) communication turns the audio system into software services, so that application-layer service logic can be quickly organized for new scene requirements and newly added or personalized sound functions can be realized. In addition, different audio services can be provided to multiple sound zones at the same time, partitioning the in-vehicle audio and improving the riding experience of the driver and passengers. Moreover, each audio region can hold a voice conversation with the vehicle through its microphone for personalized human-machine interaction, for example controlling the air-conditioning temperature or blowing mode of a given zone, the volume of a given zone, or the sound source of a given zone. Each audio region can also interact with the vehicle through its display screen for personalized human-machine interaction: for example, the driver zone broadcasts navigation and traffic information in real time; the front-passenger zone connects to Bluetooth and plays Bluetooth music; the rear-left display screen plays a film; and the rear-right display screen mirrors a mobile phone game and plays the game background sound. The in-vehicle audio processing system in this embodiment is shown in fig. 7. The core of the entire system is a DSP chip, which supports analog-to-digital conversion, digital-to-analog conversion, hardware noise reduction, channel connection, sound mixing, sound effect processing, sound field processing, gain control and other functions. Around the DSP chip are modules that interact with it, such as a System on Chip (SoC), an MCU (Microcontroller Unit), the instrument cluster, a Bluetooth module, an AI module, microphones and a power amplifier chip.
In an exemplary embodiment, after the audio information input by the target object in the target audio region is acquired, the system further: shoots the target object to obtain a corresponding target image; identifies the target image to acquire the expression and emotion of the target object; and calls the digital signal processor based on the expression and emotion of the target object to adjust the air conditioner, the vehicle windows, the atmosphere lamps and the music in the target vehicle. Thus, in this embodiment, the AI camera in each audio region can capture the user's expression and emotion and actively adjust the air conditioner, windows, atmosphere lamps and music in the vehicle, so that the user's mood tends to be comfortable and pleasant.
In an exemplary embodiment, after acquiring the audio information input by the target object in the target audio region, the method further includes: acquiring an interactive signal of the target object on a preset display screen; calling the digital signal processor based on the interactive signal, and acquiring at least one video from a preset storage area; and transmitting the acquired video to the preset display screen for playing. Thus, in this embodiment, the user in each audio region can touch the preset display screen and then view the video played on it, realizing human-machine interaction.
In an exemplary embodiment, the plurality of audio regions preset in the target vehicle comprise a driving domain audio region, a cockpit domain audio region and a vehicle control domain audio region; the driving domain audio region is connected to the cockpit domain audio region and to the vehicle control domain audio region respectively, and the cockpit domain audio region is also connected to the vehicle control domain audio region; any two of these regions exchange data through the DDS protocol. In this embodiment, the vehicle control domain can provide a complete set of basic service interfaces of the audio system, so that the driving domain and the cockpit domain can quickly organize application-layer service logic according to scene requirements and realize newly added or personalized sound functions. As shown in fig. 3, in this embodiment the DDS protocol is used between the domains to turn the sound partitioning of the whole vehicle into software services. The vehicle control domain mainly drives the DSP chip and exposes the functions that the DSP chip can realize as corresponding basic service interfaces, so that the cockpit domain and the driving domain call the corresponding DSP basic service interfaces to satisfy different sound requirements in a specific scene. The cockpit domain and the driving domain compose reasonable combinations of services according to the different requirements of users in different scenes; a function can be realized quickly simply by reorganizing and developing the upper-layer service logic software and calling the DSP service interfaces, so that user requirements are ultimately met to the greatest extent.
As an example, the driving domain audio region, the cockpit domain audio region and the vehicle control domain audio region in this embodiment are configured with a display screen, a camera, a microphone, a loudspeaker and an earphone. The microphone is used for acquiring audio information input by a target object in a target audio region; the loudspeaker or the earphone constitutes the audio output device and is used for playing the audio to be played in the target audio region; the display screen is used for playing or displaying videos; and the camera is used for shooting the target object. Specifically, as shown in fig. 4, the vehicle in this embodiment may be divided into four audio regions: a driver audio region, a front-passenger audio region, a rear-left audio region and a rear-right audio region. Each audio region is provided with its own display screen, AI camera, microphone and loudspeaker/earphone. Thus, in this embodiment, the user in each audio region can touch the display screen or watch the video it plays, realizing human-machine interaction; the users in the audio regions can hold voice conversations through the microphones, realizing human-machine interaction; and users in each audio region can listen to music and audiobooks through the loudspeaker/earphone, realizing human-machine interaction. The AI camera in each audio region can capture the user's expression and emotion and actively adjust the air conditioner, windows, atmosphere lamps and music in the vehicle, so that the user's mood tends to be comfortable and pleasant.
In an exemplary embodiment, the process of driving the digital signal processor to mix a plurality of sound sources based on the audio interaction request and generate the audio to be played includes: responding to the audio interaction request, and driving the digital signal processor to select, according to the response result, the sound source corresponding to the audio interaction request from a plurality of preset sound sources, the selected source being denoted as the target sound source; and performing sound mixing, sound effect processing, sound field processing and gain control on the target sound source to generate the audio to be played. Specifically, as shown in fig. 5, the sound sources in this embodiment include but are not limited to: a multimedia music sound source, a navigation sound source, a voice sound source, a Bluetooth telephone sound source, a key-tone sound source, an alarm sound source, a game background sound source, a video background sound source, eCall/bCall, and the like. These sound sources can be mixed in the DSP, and after sound effect processing, noise reduction, gain control, sound field control and the like are applied to specific sound sources, the result is output to the loudspeakers/earphones of the different audio regions. For example, the driver audio region plays multimedia music and navigation; the front-passenger audio region plays a Bluetooth telephone call; the rear-left audio region plays a film; and the rear-right audio region plays a mobile phone game.
In summary, the present application provides an in-vehicle audio processing system which first acquires audio information input by a target object in a target audio region, identifies the audio information, and generates an audio interaction request associated with the target object; then, based on the audio interaction request, drives a digital signal processor to mix a plurality of sound sources and generate audio to be played; and finally calls a service interface of the digital signal processor, transmits the audio to be played from the service interface to an audio output device through a data distribution service, and plays it in the target audio region through the audio output device; wherein the target audio region comprises at least one of a plurality of audio regions preset within the target vehicle, the target object comprises at least one person associated with the target vehicle, the target vehicle comprises a vehicle determined in advance or in real time, and the audio output device is located in the target audio region. The system uses DDS communication to turn the audio system into software services, so that application-layer service logic can be quickly organized for new scene requirements and newly added or personalized sound functions can be realized. The system can also provide different audio services to multiple sound zones at the same time, partitioning the in-vehicle audio and improving the riding experience of the driver and passengers. In addition, the system recognizes the activities and emotions of the users in each audio region through the AI module and can control the sound source content and volume of each audio region separately through a series of algorithm strategies. Meanwhile, each audio region can hold a voice conversation with the vehicle through its microphone for personalized human-machine interaction, for example controlling the air-conditioning temperature or blowing mode of a given zone, the volume of a given zone, or the sound source of a given zone. Each audio region can also interact with the vehicle through its display screen: for example, the driver zone broadcasts navigation and traffic information in real time; the front-passenger zone connects to Bluetooth and plays Bluetooth music; the rear-left display screen plays a film; and the rear-right display screen mirrors a mobile phone game and plays the game background sound. The vehicle control domain can provide a complete set of basic service interfaces of the audio system, so that the driving domain and the cockpit domain can quickly organize application-layer service logic according to scene requirements and realize newly added or personalized sound functions. In addition, compared with the prior art, the audio content output by each loudspeaker can be determined by the upper-layer application, and the cockpit domain, the driving domain and the vehicle control domain can flexibly control the audio system according to scene requirements.
Besides audio amplification, the system also supports audio mixing, sound effect processing, sound field control, noise reduction, voice recognition, Bluetooth communication, abnormal-condition alarms and other functions, forming an audio system designed for the whole vehicle. With the data-oriented service framework, a newly added audio scene function can be realized simply by developing the scene service in the upper-layer application and calling the atomic or newly added services provided by the bottom layer, which makes OTA iteration easy.
It should be noted that the in-vehicle audio processing system provided in the foregoing embodiment and the in-vehicle audio processing method provided in the foregoing embodiment belong to the same concept, and specific ways for the modules and units to perform operations have been described in detail in the method embodiment, and are not described herein again. In practical applications, the in-vehicle audio processing system provided in the foregoing embodiment may distribute the functions to different functional modules according to needs, that is, divide the internal structure of the system into different functional modules to complete all or part of the functions described above, which is not limited herein.
An embodiment of the present application further provides an electronic device, including: one or more processors; a storage device, configured to store one or more programs, which when executed by the one or more processors, cause the electronic device to implement the in-vehicle audio processing method provided in each of the above embodiments.
FIG. 8 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application. It should be noted that the computer system 1000 of the electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 8, the computer system 1000 includes a Central Processing Unit (CPU) 1001 that can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage portion 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for system operation are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An Input/Output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display panel such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a Local Area Network (LAN) card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is installed into the storage section 1008 as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1009 and/or installed from the removable medium 1011. When the computer program is executed by the Central Processing Unit (CPU) 1001, various functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the embodiments of the present application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, and the like, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be provided in a processor. The names of these units do not, in some cases, constitute a limitation on the units themselves.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the in-vehicle audio processing method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the in-vehicle audio processing method provided in the above embodiments.
The above-described embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (10)

1. An in-vehicle audio processing method, characterized in that the method comprises the steps of:
acquiring audio information input by a target object in a target audio region; wherein the target audio region comprises at least one of a plurality of audio regions preset within a target vehicle, the target object comprises at least one person associated with the target vehicle, and the target vehicle comprises a vehicle determined in advance or in real time;
identifying the audio information and generating an audio interaction request associated with the target object;
driving a digital signal processor to mix sound of a plurality of sound sources based on the audio interaction request to generate audio to be played;
calling a service interface of the digital signal processor, transmitting the audio to be played from the service interface to an audio output device through a data distribution service, and playing the audio to be played in the target audio region through the audio output device; wherein the audio output device is located at the target audio region.
2. The in-vehicle audio processing method according to claim 1, wherein after acquiring the audio information input by the target object in the target audio region, the method further comprises:
shooting the target object to obtain a corresponding target image;
identifying the target image to acquire the expression and emotion of the target object;
and calling the digital signal processor based on the expression and emotion of the target object, and adjusting an air conditioner, a vehicle window, an atmosphere lamp and music in the target vehicle.
3. The in-vehicle audio processing method according to claim 1, wherein after acquiring the audio information input by the target object in the target audio region, the method further comprises:
acquiring an interactive signal of the target object on a preset display screen;
calling the digital signal processor based on the interactive signal, and acquiring at least one video from a preset storage area;
and transmitting the acquired video to the preset display screen for playing.
4. The in-vehicle audio processing method according to any one of claims 1 to 3, wherein the plurality of audio regions preset in the target vehicle comprise a driving domain audio region, a cockpit domain audio region and a vehicle control domain audio region, the driving domain audio region being connected with the cockpit domain audio region and the vehicle control domain audio region respectively, and the cockpit domain audio region being further connected with the vehicle control domain audio region; wherein any two of the driving domain audio region, the cockpit domain audio region and the vehicle control domain audio region transmit data to each other through a data distribution service protocol.
5. The in-vehicle audio processing method according to claim 4, wherein each of the driving domain audio region, the cockpit domain audio region and the vehicle control domain audio region is pre-configured with a display screen, a camera, a microphone, a loudspeaker and an earphone;
the microphone is used for acquiring audio information input by a target object in a target audio region;
the loudspeaker or the earphone forms the audio output device and is used for playing the audio to be played in the target audio region;
the display screen is used for playing or displaying videos;
the camera is used for shooting a target object.
6. The in-vehicle audio processing method according to claim 1, wherein the step of driving a digital signal processor to mix a plurality of sound sources based on the audio interaction request to generate audio to be played comprises:
responding to the audio interaction request, and driving the digital signal processor to select, according to a response result, a sound source corresponding to the audio interaction request from a plurality of preset sound sources and mark the selected sound source as a target sound source;
and performing sound mixing, sound effect processing, sound field processing and gain control on the target sound source to generate the audio to be played.
7. The method of claim 6, wherein the predetermined plurality of sound sources comprise: multimedia music sound source, navigation sound source, voice sound source, bluetooth telephone sound source, key sound source, alarm sound source, game background sound source and video background sound source.
8. An in-vehicle audio processing system, comprising:
the audio acquisition module is used for acquiring audio information input by a target object in a target audio region; wherein the target audio region comprises at least one of a plurality of audio regions preset within a target vehicle, the target object comprises at least one person associated with the target vehicle, and the target vehicle comprises a vehicle determined in advance or in real time;
the audio identification module is used for identifying the audio information and generating an audio interaction request associated with the target object;
the audio mixing module is used for driving the digital signal processor to mix audio of a plurality of sound sources according to the audio interaction request so as to generate audio to be played;
the audio playing module is used for calling a service interface of the digital signal processor, transmitting the audio to be played from the service interface to an audio output device through a data distribution service, and playing the audio to be played in the target audio region through the audio output device; wherein the audio output device is located at the target audio region.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the in-vehicle audio processing method of any of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the in-vehicle audio processing method according to any one of claims 1 to 7.
CN202210911455.3A 2022-07-30 2022-07-30 In-vehicle audio processing method, system, electronic equipment and storage medium Active CN115278462B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210911455.3A CN115278462B (en) 2022-07-30 2022-07-30 In-vehicle audio processing method, system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210911455.3A CN115278462B (en) 2022-07-30 2022-07-30 In-vehicle audio processing method, system, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115278462A true CN115278462A (en) 2022-11-01
CN115278462B CN115278462B (en) 2024-07-23

Family

ID=83747795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210911455.3A Active CN115278462B (en) 2022-07-30 2022-07-30 In-vehicle audio processing method, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115278462B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1524388A (en) * 2001-05-23 2004-08-25 On-line music data providing system via bluetooth car kit
US20050280524A1 (en) * 2004-06-18 2005-12-22 Applied Digital, Inc. Vehicle entertainment and accessory control system
US20180189103A1 (en) * 2017-01-05 2018-07-05 Guardknox Cyber Technologies Ltd. Specially programmed computing systems with associated devices configured to implement centralized services ecu based on services oriented architecture and methods of use thereof
US20190171409A1 (en) * 2017-12-06 2019-06-06 Harman International Industries, Incorporated Generating personalized audio content based on mood
CN110475180A (en) * 2019-08-23 2019-11-19 科大讯飞(苏州)科技有限公司 Vehicle multi-sound area audio processing system and method
CN113780062A (en) * 2021-07-26 2021-12-10 岚图汽车科技有限公司 Vehicle-mounted intelligent interaction method based on emotion recognition, storage medium and chip
CN114327041A (en) * 2021-11-26 2022-04-12 北京百度网讯科技有限公司 Multi-mode interaction method and system for intelligent cabin and intelligent cabin with multi-mode interaction method and system
CN114120983A (en) * 2021-12-09 2022-03-01 阿波罗智联(北京)科技有限公司 Audio data processing method and device, equipment and storage medium
CN114435279A (en) * 2022-03-11 2022-05-06 中国第一汽车股份有限公司 Vehicle area controller, vehicle control system and vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王毅: "基于SOA的整车企业技术数据集成研究" [Research on SOA-based technical data integration for vehicle enterprises], 《湖南大学学报(自然科学版)》 [Journal of Hunan University (Natural Sciences)], 31 May 2010 (2010-05-31) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024139655A1 (en) * 2022-12-30 2024-07-04 蔚来汽车科技(安徽)有限公司 Multi-device audio management method and system for in-vehicle scenario
CN115794025A (en) * 2023-02-07 2023-03-14 南京芯驰半导体科技有限公司 Vehicle-mounted audio partition output system and method
CN115878070A (en) * 2023-03-01 2023-03-31 上海励驰半导体有限公司 Vehicle-mounted audio playing method, device, equipment and storage medium
CN115878070B (en) * 2023-03-01 2023-06-02 上海励驰半导体有限公司 Vehicle-mounted audio playing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115278462B (en) 2024-07-23

Similar Documents

Publication Publication Date Title
CN115278462B (en) In-vehicle audio processing method, system, electronic equipment and storage medium
CN108597509A (en) Intelligent sound interacts implementation method, device, computer equipment and storage medium
JP2022505374A (en) Online document sharing methods, devices, electronic devices and storage media
JP2021179972A (en) Method and device for mirroring, electronic device, computer readable storage medium, and computer program
CN114724566A (en) Voice processing method, device, storage medium and electronic equipment
CN113763956A (en) Interaction method and device applied to vehicle
CN115038011A (en) Vehicle, control method, control device, control equipment and storage medium
CN112786032A (en) Display content control method, device, computer device and readable storage medium
WO2024114425A1 (en) Intelligent cabin computing power sharing architecture, computing power sharing method, device and medium
CN113220248A (en) Cross-screen display method, display equipment and vehicle
CN113436656A (en) Audio control method and device for automobile
CN116450082A (en) Cabin domain controller, software architecture thereof, intelligent cabin system and vehicle
CN115604322A (en) Intelligent cabin domain controller, control method thereof and vehicle
CN111818091B (en) Multi-person voice interaction system and method
TWI730490B (en) Display content control method, device, computer device and storage medium
CN115223582B (en) Audio noise processing method, system, electronic device and medium
CN113115172B (en) Method and system for optimizing diversified audio scenes of whole vehicle
CN111045635B (en) Audio processing method and device
CN111199519A (en) Method and device for generating special effect package
CN118051197A (en) Audio focus control method, device, electronic equipment and storage medium
CN115469824A (en) Method and device for controlling playing of vehicle-mounted audio, vehicle-mounted equipment and storage medium
CN111552469B (en) File processing method and device in application engineering and electronic equipment
CN116841950A (en) Audio data transmission method, device, chip and computer readable storage medium
CN116820283A (en) Scene arrangement method and device of vehicle-mounted virtual personal assistant
CN114723890A (en) Virtual object generation method and device, readable medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant