CN112291615A - Audio output method and audio output device - Google Patents

Audio output method and audio output device

Info

Publication number
CN112291615A
CN112291615A (Application CN202011199928.9A)
Authority
CN
China
Prior art keywords
audio data
sound
target
track control
sound track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011199928.9A
Other languages
Chinese (zh)
Inventor
Chen Yanchao (陈彦超)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011199928.9A priority Critical patent/CN112291615A/en
Publication of CN112291615A publication Critical patent/CN112291615A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8106 Monomedia components thereof involving special audio data, e.g. different tracks for different languages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8166 Monomedia components thereof involving executable data, e.g. software

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

The application discloses an audio output method and an audio output device, and belongs to the technical field of data processing. The audio output method includes: displaying at least one sound track control while initial audio data is being output; receiving a first input to a target sound track control, where the target sound track control comprises at least one of the at least one sound track control; in response to the first input, performing editing processing on target data in the initial audio data to obtain processed initial audio data, where the target data is the audio data associated with a target sound source object and the target sound source object is the sound source object corresponding to the target sound track control; and outputting the processed initial audio data. With the method and the device, various editing processes can be applied to the output audio data through the sound track controls during audio output, which enriches the ways in which audio data can be adjusted during audio output and satisfies more user requirements.

Description

Audio output method and audio output device
Technical Field
The application belongs to the technical field of data processing, and particularly relates to an audio output method and an audio output device.
Background
With the development of terminal technology and the growing demand for entertainment, audio and video have become a mainstream form of content consumption. For example, the currently booming live-streaming and short-video industries provide audio and video for people to consume and enjoy.
When people watch live broadcasts, short videos, or other types of audio and video resources, the information in those resources is obtained through pictures and sounds. During this process, people can adjust the audio and video output according to their own needs.
However, for the sound portion of an audio-video resource, the user can only adjust the output volume. The adjustment options are therefore limited and cannot satisfy further user requirements.
Disclosure of Invention
Embodiments of the application aim to provide an audio output method and an audio output device that can solve the problems that existing audio output schemes offer only a single adjustment mode and cannot satisfy further user requirements.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an audio output method, where the audio output method includes:
displaying at least one sound track control in a case where the initial audio data is output; wherein each sound track control corresponds to a sound source object in the initial audio data;
receiving a first input to a target sound track control; wherein the target sound track control comprises at least one of the at least one sound track control;
responding to the first input, and performing editing processing on target data in the initial audio data to obtain processed initial audio data; wherein the target data is audio data associated with a target sound source object; the target sound source object is a sound source object corresponding to the target sound track control;
outputting the processed initial audio data.
In a second aspect, an embodiment of the present application provides an audio output apparatus, where the audio output apparatus includes:
a processing module for displaying at least one sound track control in case of outputting the initial audio data; wherein each sound track control corresponds to a sound source object in the initial audio data;
a first receiving module for receiving a first input to a target sound track control; wherein the target sound track control comprises at least one of the at least one sound track control;
the first response module is used for responding to the first input and carrying out editing processing on target data in the initial audio data to obtain processed initial audio data; wherein the target data is audio data associated with a target sound source object; the target sound source object is a sound source object corresponding to the target sound track control;
and the output module is used for outputting the processed initial audio data.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the audio output method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the audio output method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the audio output method according to the first aspect.
In an embodiment of the present application, at least one sound track control is displayed while the initial audio data is being output, where each sound track control corresponds to a sound source object in the initial audio data. Presenting the audio data associated with each sound source object through a sound track control helps the user determine which sound source objects exist in the initial audio data; the user can then perform the first input on a sound track control to apply various editing processes to the audio data associated with that sound source object. By outputting the processed initial audio data, the result of the editing processing can be output promptly. This not only enriches the ways in which the audio data can be adjusted during audio output, but also satisfies more user requirements.
Drawings
FIG. 1 is a flow chart illustrating steps of an audio output method according to an embodiment of the present application;
FIG. 2 is a schematic illustration of a sound track display provided by an embodiment of the present application;
fig. 3 is a schematic illustration showing a sound editing control provided by an embodiment of the present application;
FIG. 4 is a flow chart of an actual application of the audio output method provided by the embodiment of the present application;
FIG. 5 is a block diagram of an audio output device according to an embodiment of the present disclosure;
fig. 6 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
The audio output method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an audio output method provided in an embodiment of the present application includes:
step 101, displaying at least one sound track control under the condition of outputting initial audio data.
In this step, when the initial audio data is output, the initial audio data may be played for output. For example, in an electronic device, the initial audio data is played by an application for playing audio data or video data (a music player, a video player, etc.). Of course, the initial audio data may also be output by transmitting it between different electronic devices. For example, the first electronic device sends the initial audio data to the second electronic device; that is, the initial audio data is output by the first electronic device.
Preferably, each sound track control corresponds to a sound source object in the initial audio data. Here, different sound source objects may be distinguished according to a preset rule, so that a corresponding sound track control is generated for the data portion of each sound source object. A sound source object can be noise, background music, a person, and so on. When the audio output method is applied to an electronic device, the sound track controls are displayed in an area of the screen that does not affect the output of the initial audio data. Taking a flexible screen as an example, referring to fig. 2, the display screen of the electronic device includes a main screen 21 and a folding screen 22; the playing picture of the initial audio data, or of the multimedia data that includes the initial audio data, is displayed on the main screen 21. If the folding screen 22 is in the idle state, the sound track controls are displayed full-screen in the folding screen 22; if the folding screen 22 is in a non-idle state, the sound track controls are displayed half-screen in the folding screen 22, but this is not limiting. The display mode of the sound track controls can also be adjusted according to other states of the folding screen 22. For example, when the folding screen 22 is opened, the sound track controls are displayed; when the folding screen 22 is closed, the sound track controls are hidden. Similarly, when the display screen of the electronic device changes from portrait to landscape orientation, the display direction of the sound track controls can be adjusted flexibly.
Step 102, receiving a first input to a target sound track control.
In this step, the target sound track control includes at least one of the at least one sound track control. That is, the input may be made for only one sound track control, or the first input may be made simultaneously for at least two sound track controls. Here, the first input to the target sound track control may be a single operation performed directly on the target sound track control, such as a click, a slide, or a long press; it may also be at least two consecutive operations on the target sound track control. For example, the first input may be a long press followed by a slide, that is, the slide is performed without releasing the press after the long press.
And 103, responding to the first input, and editing the target data in the initial audio data to obtain the processed initial audio data.
In this step, the target data is the audio data associated with the target sound source object; the target sound source object is the sound source object corresponding to the target sound track control. Since each sound track control corresponds to a sound source object, and each sound source object has its own associated audio data in the initial audio data, each sound track control corresponds to different audio data in the initial audio data. For example, if the initial audio data includes audio data corresponding to noise, background, and a character, each of these corresponds to a different sound track control. When the first input is performed on the sound track control corresponding to the character, the audio data corresponding to the character is edited, while the audio data corresponding to the background and the noise is not subjected to the same editing processing. That is, only the audio data corresponding to the character in the initial audio data is adjusted.
And 104, outputting the processed initial audio data.
In this step, the unprocessed initial audio data of step 101 is replaced with the processed initial audio data for output. Of course, it is also possible to let the user decide when to output the processed initial audio data. Preferably, outputting the processed initial audio data comprises: displaying a preset output control; receiving a fifth input to the preset output control; and, in response to the fifth input, outputting the processed initial audio data. Here, the output of the processed initial audio data is triggered by the user's fifth input to the preset output control, so that the user chooses when and whether to output the processed initial audio data.
In the embodiment of the application, at least one sound track control is displayed while the initial audio data is being output, where each sound track control corresponds to a sound source object in the initial audio data. Presenting the audio data associated with each sound source object through a sound track control helps the user determine which sound source objects exist in the initial audio data; the user can then perform the first input on a sound track control to apply various editing processes to the audio data associated with that sound source object. By outputting the processed initial audio data, the result of the editing processing can be output promptly. This not only enriches the ways in which the audio data can be adjusted during audio output, but also satisfies more user requirements.
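The four steps above can be summarized, purely as an illustration, in the following minimal sketch. All class, function, and field names are assumptions introduced for this example and are not part of the application; the sketch only mirrors the displaying / receiving / editing / outputting flow of steps 101 to 104.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SoundTrack:
    source_object: str   # e.g. "character", "background", "noise"
    samples: list        # audio data associated with this sound source object

@dataclass
class SoundTrackControl:
    track: SoundTrack

def display_sound_track_controls(tracks: List[SoundTrack]) -> List[SoundTrackControl]:
    # Step 101: one sound track control per sound source object in the initial audio data
    return [SoundTrackControl(track=t) for t in tracks]

def on_first_input(target_controls: List[SoundTrackControl], edit: Callable[[list], list]) -> None:
    # Steps 102-103: edit only the target data, i.e. the audio associated with the
    # sound source objects of the selected (target) sound track controls
    for control in target_controls:
        control.track.samples = edit(control.track.samples)

def output_processed_audio(controls: List[SoundTrackControl], mix: Callable[[List[list]], list]) -> list:
    # Step 104: mix all tracks back together and output the processed initial audio data
    return mix([c.track.samples for c in controls])
```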
Optionally, in the case of outputting the initial audio data, displaying at least one sound track control, including:
displaying the track control in a case where the initial audio data is output;
in this step, the audio track control is used to control the display of the sound track control.
Receiving a second input to the audio track control;
in this step, the second input may be a click, a slide, a long press, or the like.
In response to a second input, at least one sound track control is displayed.
In this step, each sound track control corresponds to a sound source object in the initial audio data.
In the embodiment of the application, the display of the sound track controls is controlled through the audio track control. The user can freely choose whether to trigger the audio track control as needed, which improves the user's options; at the same time, it avoids the poor experience of directly displaying the sound track controls when the user does not need to edit the initial audio data.
Optionally, displaying at least one sound track control, comprising:
respectively extracting audio data respectively associated with at least one sound source object in the initial audio data;
in this step, the audio data associated with each sound source object may be extracted according to a rule of dividing different sound source objects, respectively. When different sound source objects are divided by frequency, for example, audio data of different frequency bands are taken as audio data associated with the different sound source objects. Specifically, the time-domain sound signal may be converted into the frequency-domain sound signal, and the audio data associated with the sound source object may be divided according to the frequency band, so as to extract the audio data associated with the sound source object. Of course, different sound source objects can also be divided by timbre, which is not described in detail herein.
Generating a sound track corresponding to each sound source object according to the obtained audio data associated with the at least one sound source object;
in this step, each sound track corresponds to audio data associated with one sound source object.
Sound track controls for controlling the sound tracks, respectively, are displayed.
In the embodiment of the application, audio data associated with a sound source object is extracted to generate a sound track; thus, with the sound track control that controls the sound track, it is possible to realize editing processing of audio data associated with the sound source object.
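A minimal sketch of the frequency-band separation mentioned above follows, assuming the sound source objects are distinguished by fixed, hand-chosen frequency bands. The band boundaries, labels, and sample rate are illustrative assumptions only; a real implementation could equally use timbre-based or learned source separation.

```python
import numpy as np

def split_by_frequency_bands(samples, sample_rate, bands):
    """Split one time-domain signal into per-source tracks, one per frequency band.

    bands: dict mapping a sound source label to a (low_hz, high_hz) interval.
    """
    samples = np.asarray(samples, dtype=float)
    spectrum = np.fft.rfft(samples)                               # time domain -> frequency domain
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    tracks = {}
    for label, (low, high) in bands.items():
        band = np.where((freqs >= low) & (freqs < high), spectrum, 0)
        tracks[label] = np.fft.irfft(band, n=len(samples))        # back to the time domain
    return tracks

# Illustrative use: low band as "background", mid band as "character", high band as "noise".
# tracks = split_by_frequency_bands(samples, 44100,
#     {"background": (0, 250), "character": (250, 4000), "noise": (4000, 20000)})
```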
Optionally, in response to the first input, performing editing processing on target data in the initial audio data to obtain processed initial audio data, including:
responding to the first input, and editing target data in the initial audio data to obtain intermediate data;
synthesizing the intermediate data and the audio data associated with the sound source objects corresponding to the rest of the sound track controls to obtain processed initial audio data;
in this step, the remaining sound track control is a sound track control other than the target sound track control in the at least one sound track control. In the process of displaying the sound track control, the initial audio data is separated into the audio data associated with each sound source object, so before the audio data is output, the edited target data and the audio data associated with the rest sound source objects are synthesized to obtain a synthesized audio file. Here, in the synthesis processing, the synthesis processing may be performed by using a track synthesis technique; and the preset synthesis rule can be adopted for synthesis processing, so that the audio data obtained after synthesis has better auditory effect. For example, a preset weighting algorithm is adopted, and audio data and different timbres associated with different sound source objects corresponding to the sound track control are subjected to synthesis processing, so that a better auditory effect is achieved.
In the embodiment of the application, the target data and the audio data associated with the other sound source objects can be synthesized to obtain a synthesized audio file, so that the output is convenient.
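A minimal sketch of such a weighted synthesis step follows. The weight values and the normalization step are illustrative assumptions; the application itself only requires that the intermediate data and the remaining tracks be combined into one output.

```python
import numpy as np

def synthesize(intermediate_data, remaining_tracks, weights=None):
    """Mix the edited target data with the audio of the remaining sound source objects.

    remaining_tracks: dict mapping a sound source label to its samples.
    weights: optional per-track weights (a toy stand-in for a preset weighting algorithm).
    """
    tracks = {"target": np.asarray(intermediate_data, dtype=float)}
    tracks.update({k: np.asarray(v, dtype=float) for k, v in remaining_tracks.items()})
    weights = weights or {label: 1.0 for label in tracks}
    length = max(len(t) for t in tracks.values())
    mix = np.zeros(length)
    for label, samples in tracks.items():
        mix[: len(samples)] += weights.get(label, 1.0) * samples
    peak = max(np.max(np.abs(mix)), 1.0)   # avoid clipping after summation
    return mix / peak
```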
Optionally, in response to the first input, performing editing processing on target data in the initial audio data, including:
displaying at least one sound editing control in response to the first input;
in this step, each sound editing control corresponds to an editing process for the target data. As shown in fig. 3, the sound editing controls include an add control 31, a delete control 32, and a modify control 33; wherein, the adding control 31 is used for adding a new tone; the delete control 32 is used to delete the current timbre; the modification control 33 is used to adjust the parameters of the current timbre. Various editing processes of tone can be realized through the sound editing control, and the purpose of changing sound is achieved. Of course the sound editing controls may also include controls for other editing processes for tone. For example, replacing the control, replacing the current tone with a pre-stored tone; the superposition control is used for carrying out tone superposition on the plurality of tones and synthesizing a new tone; and the separation control is used for carrying out tone separation operation on the current tone. Here, the sound editing control may further include a control for performing an editing process with respect to other characteristics of the sound, such as a control for adjusting a sound size, a control for adjusting a sound frequency, and the like. Preferably, an editing interface can be displayed, and the sound editing control is displayed in the editing interface. Each editing process for the target data may be a single independent operation, or may be at least two associated operations that implement the same function.
Receiving a third input to the target sound editing control;
in this step, the third input may be a click, a slide, a long press, or the like. The target sound editing control is one of the at least one sound editing control;
in response to the third input, target editing processing is performed on the target data.
In this step, the target editing processing is the editing processing corresponding to the target sound editing control, that is, the executable operation that can be triggered by the target sound editing control. With continued reference to fig. 3, when the target sound editing control is the adding control 31, the corresponding editing process is the operation of adding a new timbre; similarly, when the target sound editing control is the delete control 32, the corresponding editing process is the operation of deleting the current timbre.
In the embodiment of the application, different sound editing controls are displayed for the user to select; therefore, the user can realize different editing processing on the initial audio data according to the requirement of the user.
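The mapping from the sound editing controls of fig. 3 to concrete editing processes could look like the minimal sketch below. The control identifiers and the toy per-control operations are assumptions made for illustration; they stand in for whatever timbre add / delete / modify processing an implementation actually uses.

```python
import numpy as np

# Toy editing processes keyed by the sound editing control that triggers them
SOUND_EDITING_PROCESSES = {
    "add":    lambda samples, ctx: samples + np.resize(ctx["new_timbre"], samples.shape),  # adding control 31
    "delete": lambda samples, ctx: np.zeros_like(samples),                                 # delete control 32
    "modify": lambda samples, ctx: samples * ctx.get("gain", 1.0),                         # modification control 33
}

def on_third_input(control_id, target_data, ctx=None):
    # The target editing processing is the process corresponding to the target sound editing control
    samples = np.asarray(target_data, dtype=float)
    return SOUND_EDITING_PROCESSES[control_id](samples, ctx or {})
```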
Optionally, the outputting of the initial audio data includes: outputting the initial audio data in the process of playing a target type file; wherein the target type file is an audio file or a video file that includes at least the initial audio data.
Here, the target type file may be played through an application installed in the electronic device. In a case where the target type file only includes the initial audio data, the target type file is an audio file; in a case where the target type file includes the initial audio data and video frames, the target type file is a video file. Specifically, cases in which the initial audio data is output include: playing an audio file (voice, music, etc.) through a music player, outputting the generated voice data during a call, playing a video file (a movie, a short video, etc.) through a video player, playing a live broadcast through a live-broadcast application, and so on.
In the embodiment of the application, the audio data in the audio file or the video file can be edited in the process of playing the audio file or the video file, so that the output audio data can be adjusted, and the requirements of more scenes can be met.
Optionally, the editing process includes: a first editing operation for instructing to adjust an output volume of audio data associated with the target sound source object, a second editing operation for instructing to delete audio data associated with the target sound source object, a third editing operation for instructing to replace audio data associated with the target sound source object, or a fourth editing operation for instructing to adjust a tone color parameter of audio data associated with the target sound source object.
Here, when the second editing operation is performed, the target sound track control is simultaneously deleted from the displayed sound track controls. For example, while watching a video that builds a horror atmosphere through horror music, the background music (the horror music) in the video can be deleted through the second editing operation, so that the level of fright brought by the horror video is reduced. When the third editing operation is performed, the displayed sound track controls are updated at the same time, that is, a new sound track control replaces the target sound track control in the display. For example, while watching a live broadcast, the background music in the live data can be replaced through the third editing operation with music the user likes. When the fourth editing operation is performed, certain fixed parameters of the timbre are adjusted. Of course, the editing operations may also include other operations on the audio data, such as adding, deleting, superimposing, or separating timbres. For example, during a call, the timbre can be edited to achieve a voice-changing effect, so that the other party believes they are talking with someone else.
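As one possible reading of the four editing operations above, the minimal sketch below treats each operation as a pure function on the target track's samples. The gain value, the replacement track, and the "brightness" timbre parameter are illustrative assumptions, not parameters defined by the application.

```python
import numpy as np

def adjust_volume(samples, gain):                     # first editing operation: output volume
    return np.asarray(samples, dtype=float) * gain

def delete_audio(samples):                            # second editing operation: delete the track's audio
    return np.zeros_like(np.asarray(samples, dtype=float))

def replace_audio(samples, replacement):              # third editing operation: replace with other audio
    return np.asarray(replacement, dtype=float)

def adjust_timbre(samples, brightness=1.0):           # fourth editing operation: a toy timbre tweak
    samples = np.asarray(samples, dtype=float)
    spectrum = np.fft.rfft(samples)
    spectrum[len(spectrum) // 2:] *= brightness       # scale high-frequency content
    return np.fft.irfft(spectrum, n=len(samples))
```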
Optionally, in the case of outputting the initial audio data, after the step of displaying at least one sound track control, the audio output method further includes:
receiving a fourth input to the sound track control;
in response to a fourth input, saving audio data associated with a sound source object corresponding to the sound track control.
In this step, all or part of the audio data associated with the sound source object corresponding to the sound track control may be saved. When audio data associated with each of a plurality of sound source objects is saved, the audio data associated with each sound source object is saved separately; that is, each saved file contains audio data associated with only one sound source object (a minimal sketch of such saving is given after this embodiment). For example, in the process of playing a video, if the user is interested in the background music of the video file, a fourth input may be performed on the sound track control corresponding to the background music, so as to save the background music.
In the embodiment of the application, the audio data associated with the sound source object can be stored, and the stored audio data is applied to more scenes, so that more user requirements are met.
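A minimal sketch of saving the audio data associated with one sound source object to its own file is shown below, using Python's standard wave module; the 16-bit mono PCM format and the file-naming scheme are assumptions made for this example.

```python
import wave
import numpy as np

def save_track(samples, sample_rate, label):
    """Save one sound source object's audio (floats in [-1, 1]) as <label>.wav."""
    pcm = (np.clip(np.asarray(samples, dtype=float), -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(f"{label}.wav", "wb") as f:
        f.setnchannels(1)      # mono
        f.setsampwidth(2)      # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())

# e.g. save only the background music the user is interested in:
# save_track(tracks["background"], 44100, "background_music")
```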
Optionally, in the case of outputting the initial audio data, after the step of displaying at least one sound track control, the audio output method further includes:
displaying a target control;
receiving a fifth input to the target control;
and responding to the fifth input, and generating and displaying a corresponding sound track control according to the pre-stored audio data.
In this step, the target control is used for adding a sound track control; specifically, according to pre-stored audio data, a sound track control corresponding to the audio data is generated. By operating the sound track control, editing processing of pre-stored audio data can be realized.
In the embodiment of the application, the displayed sound track control can be added according to the pre-stored audio data; therefore, more operations on the initial audio data are realized, and more user requirements are met.
Fig. 4 is a flowchart illustrating an actual application of the audio output method according to the embodiment of the present application; the audio output method is described as applied to an electronic device with a flexible screen, and the flowchart includes:
step 401, playing a small video or watching a live broadcast. I.e. playing the video file via the electronic device.
Step 402, the track projection system is turned on. Here, the user turns on the track projection system through a preset control displayed in the display screen of the electronic device. The track projection system here is an executable code module installed in the electronic device. After the track projection system is turned on, the following step 403 will be performed.
In step 403, the audio tracks are extracted and projected onto the folding screen. For the audio data in the video file of step 401, the audio data associated with each sound source object is extracted, a corresponding sound track control is generated, and the sound track control is displayed in the folding screen of the electronic device, that is, the folding screen 22 shown in fig. 2. Here, each sound track control corresponds to different content of the audio data in the video file; for example, different sound track controls correspond respectively to background sound, noise, and the voice of a speaking person.
Step 404, the user adjusts the sound track controls. Here, the user can perform editing processing on the audio data in the video file by adjusting the sound track controls. The adjustment of a sound track control includes: viewing and modifying the timbre so as to change a person's speaking voice and achieve a voice-changing effect; adjusting the volume; and adding, deleting, and saving sound track controls, so that the corresponding audio data is added, deleted, or saved.
Step 405, the modified audio tracks are synthesized and output. After the sound track controls are adjusted, the audio data associated with the sound source objects corresponding to the displayed sound track controls is synthesized, and the synthesized audio data is output. Preferably, the sound track controls are adjusted in real time and the modified audio tracks are output in real time.
It should be noted that, in the audio output method provided in the embodiment of the present application, the execution subject may be an audio output device, or a control module in the audio output device for executing the audio output method. In the embodiment of the present application, an audio output device executing the audio output method is taken as an example to describe the audio output device provided in the embodiment of the present application.
As shown in fig. 5, an embodiment of the present application further provides an audio output device, including:
a processing module 51 for displaying at least one sound track control in case of outputting the initial audio data; wherein each sound track control corresponds to a sound source object in the initial audio data;
a first receiving module 52, configured to receive a first input to the target sound track control; wherein the target sound track control comprises at least one of the at least one sound track control;
a first response module 53, configured to respond to the first input, perform editing processing on target data in the initial audio data to obtain processed initial audio data; wherein the target data is audio data associated with a target sound source object; the target sound source object is a sound source object corresponding to the target sound track control;
and an output module 54, configured to output the processed initial audio data.
Optionally, the processing module 51 includes:
a display unit for displaying the track control in a case where the initial audio data is output;
a first receiving unit for receiving a second input to the track control;
a first response unit for displaying at least one sound track control in response to the second input.
Optionally, displaying at least one sound track control, comprising:
respectively extracting audio data respectively associated with at least one sound source object in the initial audio data;
generating a sound track corresponding to each sound source object according to the obtained audio data associated with the at least one sound source object;
sound track controls for controlling the sound tracks, respectively, are displayed.
Optionally, the first response module 53 includes:
the second response unit is used for responding to the first input and editing the target data in the initial audio data to obtain intermediate data;
the synthesis processing unit is used for carrying out synthesis processing on the intermediate data and the audio data associated with the sound source object corresponding to the rest sound track control to obtain processed initial audio data; and the rest sound track controls are sound track controls except the target sound track control in the at least one sound track control.
Optionally, in response to the first input, performing editing processing on target data in the initial audio data, including:
displaying at least one sound editing control in response to the first input; each sound editing control corresponds to editing processing aiming at target data;
receiving a third input to the target sound editing control; the target sound editing control is one of the at least one sound editing control;
performing target editing processing on the target data in response to the third input; and the target editing processing is the editing processing corresponding to the target sound editing control.
Optionally, the outputting of the initial audio data includes: outputting the initial audio data in the process of playing a target type file; wherein the target type file is an audio file or a video file that includes at least the initial audio data.
Optionally, the editing operation includes: a first editing operation for instructing to adjust an output volume of audio data associated with the target sound source object, a second editing operation for instructing to delete audio data associated with the target sound source object, a third editing operation for instructing to replace audio data associated with the target sound source object, or a fourth editing operation for instructing to adjust a tone color parameter of audio data associated with the target sound source object.
Optionally, the audio output device further includes:
a second receiving module, configured to receive a fourth input to the sound track control;
and a second response module for saving audio data associated with the sound source object corresponding to the sound track control in response to the fourth input.
Optionally, the audio output device further includes:
the display module is used for displaying the target control;
the third receiving module is used for receiving a fifth input of the target control;
and the third response module is used for responding to the fifth input and generating and displaying the corresponding sound track control according to the pre-stored audio data.
In the embodiment of the application, at least one sound track control is displayed while the initial audio data is being output, where each sound track control corresponds to a sound source object in the initial audio data. Presenting the audio data associated with each sound source object through a sound track control helps the user determine which sound source objects exist in the initial audio data; the user can then perform the first input on a sound track control to apply various editing processes to the audio data associated with that sound source object. By outputting the processed initial audio data, the result of the editing processing can be output promptly. This not only enriches the ways in which the audio data can be adjusted during audio output, but also satisfies more user requirements.
The audio output device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The audio output device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The audio output device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 6, an electronic device 600 is further provided in this embodiment of the present application, and includes a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601, where the program or the instruction is executed by the processor 601 to implement each process of the foregoing audio output method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components; details are not repeated here.
A display unit 706 for displaying at least one sound track control in a case where the initial audio data is output; wherein each sound track control corresponds to a sound source object in the initial audio data;
a user input unit 707 for receiving a first input to a target sound track control; wherein the target sound track control comprises at least one of the at least one sound track control;
the processor 710 is configured to, in response to a first input, perform editing processing on target data in the initial audio data to obtain processed initial audio data; wherein the target data is audio data associated with a target sound source object; the target sound source object is a sound source object corresponding to the target sound track control;
and an audio output unit 703 for outputting the processed initial audio data.
In the embodiment of the application, at least one sound track control is displayed while the initial audio data is being output, where each sound track control corresponds to a sound source object in the initial audio data. Presenting the audio data associated with each sound source object through a sound track control helps the user determine which sound source objects exist in the initial audio data; the user can then perform the first input on a sound track control to apply various editing processes to the audio data associated with that sound source object. By outputting the processed initial audio data, the result of the editing processing can be output promptly. This not only enriches the ways in which the audio data can be adjusted during audio output, but also satisfies more user requirements.
Optionally, the processor 710 is specifically configured to, in response to the first input, perform editing processing on target data in the initial audio data to obtain intermediate data; synthesizing the intermediate data and the audio data associated with the sound source objects corresponding to the rest of the sound track controls to obtain processed initial audio data; and the rest sound track controls are sound track controls except the target sound track control in the at least one sound track control.
Here, the target data is synthesized with the audio data associated with the remaining sound source objects to obtain a synthesized audio file, thereby facilitating output.
Optionally, the user input unit 707 is further configured to receive a fourth input to the sound track control;
the processor 710 is further configured to save, in response to a fourth input, audio data associated with the sound source object corresponding to the sound track control.
Here, the audio data associated with the sound source object is saved, and the saved audio data is applied to more scenes, so that more user requirements are met.
Optionally, the display unit 706 is further configured to display a target control;
a user input unit 707 further configured to receive a fifth input to the target control;
the processor 710 is further configured to generate a corresponding sound track control according to the pre-stored audio data in response to a fifth input, and the display unit 706 is further configured to display the generated sound track control.
Here, the displayed sound track control is added according to the pre-stored audio data; therefore, more operations on the initial audio data are realized, and more user requirements are met.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned audio output method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned audio output method embodiment, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. An audio output method, characterized in that the audio output method comprises:
displaying at least one sound track control in a case where the initial audio data is output; wherein each sound track control corresponds to a sound source object in the initial audio data;
receiving a first input to a target sound track control; wherein the target sound track control comprises at least one of the at least one sound track control;
responding to the first input, and editing target data in the initial audio data to obtain processed initial audio data; wherein the target data is audio data associated with a target sound source object; the target sound source object is a sound source object corresponding to the target sound track control;
outputting the processed initial audio data.
2. The audio output method according to claim 1, wherein the displaying at least one sound track control in a case where the initial audio data is output, includes:
displaying the track control in a case where the initial audio data is output;
receiving a second input to the audio track control;
in response to the second input, displaying at least one sound track control.
3. The audio output method according to claim 1 or 2, wherein the displaying at least one sound track control comprises:
respectively extracting audio data which are respectively associated with at least one sound source object in the initial audio data;
generating a sound track corresponding to each sound source object according to the obtained audio data associated with the at least one sound source object;
and displaying sound track controls respectively used for controlling the sound tracks.
4. The audio output method according to claim 3, wherein the editing the target data in the initial audio data in response to the first input to obtain processed initial audio data comprises:
responding to the first input, and editing target data in the initial audio data to obtain intermediate data;
synthesizing the intermediate data and audio data associated with the sound source objects corresponding to the rest of the sound track controls to obtain processed initial audio data; wherein the remaining sound track control is a sound track control of the at least one sound track control other than the target sound track control.
5. The audio output method according to claim 1 or 4, wherein the editing process of the target data in the initial audio data in response to the first input includes:
displaying at least one sound editing control in response to the first input; each sound editing control corresponds to editing processing aiming at the target data;
receiving a third input to the target sound editing control; wherein the target sound editing control is one of the at least one sound editing control;
performing target editing processing on the target data in response to the third input; and the target editing processing is the editing processing corresponding to the target sound editing control.
6. The audio output method according to claim 1, wherein after the step of displaying at least one sound track control in a case where the initial audio data is output, the audio output method further comprises:
receiving a fourth input to the sound track control;
in response to the fourth input, saving audio data associated with a sound source object corresponding to the sound track control.
7. The audio output method according to claim 1, wherein after the step of displaying at least one sound track control in a case where the initial audio data is output, the audio output method further comprises:
displaying a target control;
receiving a fifth input to the target control;
and responding to the fifth input, and generating and displaying a corresponding sound track control according to pre-stored audio data.
8. An audio output apparatus, characterized in that the audio output apparatus comprises:
the processing module is used for displaying at least one sound track control under the condition of outputting initial audio data; wherein each sound track control corresponds to a sound source object in the initial audio data;
a first receiving module for receiving a first input to a target sound track control; wherein the target sound track control comprises at least one of the at least one sound track control;
the first response module is used for responding to the first input and editing target data in the initial audio data to obtain processed initial audio data; wherein the target data is audio data associated with a target sound source object; the target sound source object is a sound source object corresponding to the target sound track control;
and the output module is used for outputting the processed initial audio data.
9. The audio output device of claim 8, wherein the processing module comprises:
a display unit for displaying the track control in a case where the initial audio data is output;
a first receiving unit for receiving a second input to the track control;
a first response unit to display at least one sound track control in response to the second input.
10. The audio output device of claim 8 or 9, wherein the displaying at least one sound track control comprises:
respectively extracting audio data which are respectively associated with at least one sound source object in the initial audio data;
generating a sound track corresponding to each sound source object according to the obtained audio data associated with the at least one sound source object;
and displaying sound track controls respectively used for controlling the sound tracks.
11. The audio output device of claim 10, wherein the first response module comprises:
the second response unit is used for responding to the first input and editing the target data in the initial audio data to obtain intermediate data;
the synthesis processing unit is used for carrying out synthesis processing on the intermediate data and the audio data associated with the sound source objects corresponding to the rest sound track controls to obtain processed initial audio data; wherein the remaining sound track control is a sound track control of the at least one sound track control other than the target sound track control.
12. The audio output apparatus according to claim 8 or 11, wherein the editing process of the target data in the initial audio data in response to the first input includes:
displaying at least one sound editing control in response to the first input; each sound editing control corresponds to editing processing aiming at the target data;
receiving a third input to the target sound editing control; wherein the target sound editing control is one of the at least one sound editing control;
performing target editing processing on the target data in response to the third input; and the target editing processing is the editing processing corresponding to the target sound editing control.
13. The audio output device of claim 8, further comprising:
a second receiving module, configured to receive a fourth input to the sound track control;
a second response module to save audio data associated with a sound source object corresponding to the sound track control in response to the fourth input.
14. The audio output device of claim 8, further comprising:
the display module is used for displaying the target control;
a third receiving module, configured to receive a fifth input to the target control;
and the third response module is used for responding to the fifth input and generating and displaying a corresponding sound track control according to pre-stored audio data.
CN202011199928.9A 2020-10-30 2020-10-30 Audio output method and audio output device Pending CN112291615A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011199928.9A CN112291615A (en) 2020-10-30 2020-10-30 Audio output method and audio output device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011199928.9A CN112291615A (en) 2020-10-30 2020-10-30 Audio output method and audio output device

Publications (1)

Publication Number Publication Date
CN112291615A true CN112291615A (en) 2021-01-29

Family

ID=74353915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011199928.9A Pending CN112291615A (en) 2020-10-30 2020-10-30 Audio output method and audio output device

Country Status (1)

Country Link
CN (1) CN112291615A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140294366A1 (en) * 2013-04-01 2014-10-02 Michael-Ryan FLETCHALL Capture, Processing, And Assembly Of Immersive Experience
CN109074347A (en) * 2016-02-10 2018-12-21 J·葛拉克 Real time content editor with limitation interactivity
CN108521603A (en) * 2018-04-20 2018-09-11 深圳市零度智控科技有限公司 DTV and its playback method and computer readable storage medium
CN110989899A (en) * 2019-11-27 2020-04-10 维沃移动通信(杭州)有限公司 Control method and electronic equipment
CN111526242A (en) * 2020-04-30 2020-08-11 维沃移动通信有限公司 Audio processing method and device and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022228174A1 (en) * 2021-04-29 2022-11-03 华为技术有限公司 Rendering method and related device
CN113377326A (en) * 2021-06-08 2021-09-10 广州博冠信息科技有限公司 Audio data processing method and device, terminal and storage medium
CN113377326B (en) * 2021-06-08 2023-02-03 广州博冠信息科技有限公司 Audio data processing method and device, terminal and storage medium
CN113821189A (en) * 2021-11-25 2021-12-21 广州酷狗计算机科技有限公司 Audio playing method and device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112291615A (en) Audio output method and audio output device
RU2666966C2 (en) Audio playback control method and device
US20180349088A1 (en) Apparatus and Method for Controlling Audio Mixing in Virtual Reality Environments
US20170332020A1 (en) Video generation method, apparatus and terminal
CN112911379B (en) Video generation method, device, electronic equipment and storage medium
CN105117102B (en) Audio interface display methods and device
CN110209879B (en) Video playing method, device, equipment and storage medium
CN112367551A (en) Video editing method and device, electronic equipment and readable storage medium
CN111541945B (en) Video playing progress control method and device, storage medium and electronic equipment
CN112416229A (en) Audio content adjusting method and device and electronic equipment
US11272136B2 (en) Method and device for processing multimedia information, electronic equipment and computer-readable storage medium
CN112394901A (en) Audio output mode adjusting method and device and electronic equipment
CN113115083A (en) Display apparatus and display method
CN114095793A (en) Video playing method and device, computer equipment and storage medium
CN107948756B (en) Video synthesis control method and device and corresponding terminal
CN113407275A (en) Audio editing method, device, equipment and readable storage medium
WO2023169367A1 (en) Audio playing method and electronic device
WO2023284498A1 (en) Video playing method and apparatus, and storage medium
CN115086729B (en) Wheat connecting display method and device, electronic equipment and computer readable medium
CN117119260A (en) Video control processing method and device
CN113365010B (en) Volume adjusting method, device, equipment and storage medium
CN112399238B (en) Video playing method and device and electronic equipment
CN112073803B (en) Sound reproduction method and display device
KR20230120668A (en) Video call method and device
CN114327714A (en) Application program control method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210129