CN106653067B - Information processing method and terminal

Info

Publication number
CN106653067B
Authority
CN
China
Prior art keywords
audio data
playing
user
mode
data
Prior art date
Legal status
Active
Application number
CN201510716061.2A
Other languages
Chinese (zh)
Other versions
CN106653067A (en)
Inventor
樊豫
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201510716061.2A
Publication of CN106653067A
Application granted
Publication of CN106653067B

Landscapes

  • Management Or Editing Of Information On Record Carriers (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses an information processing method and a terminal, wherein the method comprises the following steps: playing N audio data according to a first mode through a first application, wherein N is a positive integer greater than 1; detecting a first user operation, and judging whether the first user operation is used for selecting one or more audio data as a target object appointed by a user, to obtain a judgment result; when the judgment result indicates that the first user operation is used for selecting one or more audio data as the target object appointed by the user, adding the selected one or more audio data into the target object acquisition area; and when a preset condition is met, entering an audio data enhancement processing mode of the first application, and performing editing processing including clipping on one or more audio data in the target object acquisition area.

Description

Information processing method and terminal
Technical Field
The present invention relates to multimedia information processing technologies, and in particular, to an information processing method and a terminal.
Background
With the increasing intelligence of terminals, installing various applications on a terminal can provide a user with all kinds of personalized services, giving the user the feeling that, with one device in hand, there is nothing to worry about. Taking a mobile phone as an example, a user no longer needs to invite two or three friends to a KTV to sing karaoke; by simply installing a karaoke application on the phone, the user can get online and sing together with those friends, so a single application satisfies the need. However, if the user wants to edit a song, the song must be exported from the mobile phone and processed with a dedicated data editing application. Such data editing applications are too professional, are not suited to operation by an ordinary user, and are therefore very inconvenient.
However, in the related art, there is no effective solution to this problem.
Disclosure of Invention
In view of this, embodiments of the present invention provide an information processing method and a terminal, which at least solve the above problems in the prior art by enabling an ordinary user to edit a song quickly without introducing a dedicated data editing application.
The technical scheme of the embodiment of the invention is realized as follows:
an information processing method according to an embodiment of the present invention includes:
playing N audio data according to a first mode through a first application, wherein N is a positive integer greater than 1;
detecting a first user operation, and judging whether the first user operation is used for selecting one or more audio data as a target object appointed by a user to obtain a judgment result;
when the judgment result indicates that the first user operation is used for selecting one or more audio data as the target object appointed by the user, adding the selected one or more audio data into the target object acquisition area;
and when a preset condition is met, entering an audio data enhancement processing mode of the first application, and performing editing processing including clipping on one or more audio data in the target object acquisition area.
A terminal according to an embodiment of the present invention includes:
a playing unit, configured to play N pieces of audio data according to a first mode through a first application, where N is a positive integer greater than 1;
a detection unit, configured to detect a first user operation and judge whether the first user operation is used for selecting one or more audio data as a target object appointed by the user, to obtain a judgment result;
an acquisition unit, configured to add the selected one or more audio data into the target object acquisition area when the judgment result indicates that the first user operation is used for selecting the one or more audio data as the target object appointed by the user;
and an editing processing unit, configured to enter an audio data enhancement processing mode of the first application when a preset condition is met, and to perform editing processing, including clipping, on one or more audio data in the target object acquisition area.
The information processing method of the embodiment of the invention comprises the following steps: playing N audio data according to a first mode through a first application, wherein N is a positive integer greater than 1; detecting a first user operation, and judging whether the first user operation is used for selecting one or more audio data as a target object appointed by a user, to obtain a judgment result; when the judgment result indicates that the first user operation is used for selecting one or more audio data as the target object appointed by the user, adding the selected one or more audio data into the target object acquisition area; and when a preset condition is met, entering an audio data enhancement processing mode of the first application, and performing editing processing including clipping on one or more audio data in the target object acquisition area.
By adopting the embodiment of the invention, an ordinary user can edit a song quickly in the enhancement processing mode of the first application (such as a music playing application) without introducing a dedicated data editing application, so more problems are solved with fewer applications, system resources are saved, the user can edit songs more conveniently, and song editing can be achieved simply without any special editing skill, which is very convenient and easy to popularize.
Drawings
FIG. 1 is a diagram of hardware entities performing information interaction in an embodiment of the present invention;
FIG. 2 is a flow chart illustrating an implementation of a first embodiment of the present invention;
FIG. 3 is a schematic flowchart of an implementation of the second embodiment of the present invention;
FIG. 4 is a schematic flowchart of an implementation of the fourth embodiment of the present invention;
FIG. 5 is a user interface diagram of an enhanced processing mode according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a fifth embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware component structure according to a sixth embodiment of the present invention.
Detailed Description
The following describes the embodiments in further detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of hardware entities performing information interaction in an embodiment of the present invention, where fig. 1 includes: a server 11 and terminal devices 21-24, the terminal devices 21-24 performing information interaction with the server through a wired network or a wireless network, and the terminal devices including mobile phones, desktop computers, PCs, all-in-one machines and the like. Based on the system shown in fig. 1, with the embodiment of the present invention, after obtaining audio data from the server by downloading, or while downloading audio data in real time, N audio data are played according to a first mode through a first application (e.g., a music playing application), where N is a positive integer greater than 1; a first user operation is detected, and it is judged whether the first user operation is used for selecting one or more audio data as a target object appointed by the user, to obtain a judgment result; when the judgment result indicates that the first user operation is used for selecting one or more audio data as the target object appointed by the user, the selected one or more audio data are added into the target object acquisition area; and when a preset condition is met, an audio data enhancement processing mode of the first application is entered, and editing processing including clipping is performed on one or more audio data in the target object acquisition area. With the embodiment of the present invention, an ordinary user can edit a song quickly in the enhancement processing mode of the first application (such as a music playing application) without introducing a dedicated data editing application, so more problems are solved with fewer applications, system resources are saved, the user can edit songs more conveniently, and song editing can be achieved simply without any special editing skill, which is very convenient and easy to popularize.
The above example of fig. 1 is only an example of a system architecture for implementing the embodiment of the present invention, and the embodiment of the present invention is not limited to the system architecture described in the above fig. 1, and various embodiments of the present invention are proposed based on the system architecture.
The first embodiment is as follows:
as shown in fig. 2, an information processing method according to an embodiment of the present invention includes:
step 101, playing N audio data according to a first mode through a first application, where N is a positive integer greater than 1.
Here, for example, when the first application is a music playing application, the terminal downloads a plurality of audio data from the server in advance and stores them locally for the scenario of playing songs, or downloads and plays songs from the server in real time. The first application is not limited to a music application; it may also be a video playing application, as long as the application is capable of audio playback and output and, with the audio data enhancement processing mode of step 104 added, can perform editing processing, including clipping, on one or more audio data in the target object acquisition area.
Step 102, detecting a first user operation, and judging whether the first user operation is used for selecting one or more audio data as a target object appointed by a user to obtain a judgment result.
Here, still taking the first application in step 101 as a music playing application, it is detected, for example, that the user has marked a song selected in the song list as a red-heart song (i.e., a song the user likes), or has put it into an added, user-defined song list, so as to obtain one or more songs that the user likes. Alternatively, without entering the song list interface, the user can mark a red-heart song with a single tap while a song is being played in real time, or put it into an added, user-defined song list.
And 103, when the judgment result indicates that the first user operation is used for selecting one or more audio data as the target object appointed by the user, adding the selected one or more audio data into the target object acquisition area.
Here, based on the detection in step 102, when it is known that the first user operation is for a song selection operation, but not for other touch operations, it is determined through step 103 that the selected song is audio data that the user designates to be collected, so as to be used for subsequent editing processing such as clipping and synthesis. The songs are first put into the acquisition area in the interface corresponding to the audio data enhancement processing mode.
And 104, entering an audio data enhancement processing mode of the first application when a preset condition is met, and performing editing processing including clipping on one or more audio data in the target object acquisition area.
Here, processing such as clipping and synthesizing is performed on one or more pieces of audio data placed in the capture area in the interface corresponding to the audio data enhancement processing mode in step 103.
Through steps 101-104, no additional data editing application needs to be installed to edit the audio data. Overly professional data editing applications occupy system resources and cause problems such as slowing down of the system processor; here, a single, simple music playing application with a user-friendly interface added to it is enough, which makes it easy for the user to master the audio data enhancement processing function for editing, is very convenient to use, improves usability, occupies few system resources, and keeps the system processor from being slowed down.
Here, with the audio data enhancement processing function of this simple music playing application, the function can be entered at any time: during music playback, before playback starts, after playback ends, and so on, and several different pieces of audio data can be edited and synthesized at the same time. The different audio data come from at least three types of sources: the downloaded audio data itself; the sound-mixing effect data carried by the system itself; and external data recorded by the user, such as bird calls, the sound of a stream, or recordings of the user's own voice or of friends.
Example two:
as shown in fig. 3, an information processing method according to an embodiment of the present invention includes:
step 201, locally scanning from a terminal to obtain the N audio data, importing the N audio data into a media library corresponding to the first application, and playing the N audio data in any one of the single-track loop mode, the sequential playing mode, and the random playing mode.
Here, this step may also be: downloading the N audio data from a network side by means of a networked remote search, importing the N audio data into a media library corresponding to the first application, and playing the N audio data in any one of the single-track loop mode, the sequential playing mode and the random playing mode.
Here, for example, when the first application is a music playing application, the terminal downloads a plurality of audio data from the server in advance and stores them locally for the scenario of playing songs, or downloads and plays songs from the server in real time. The first application is not limited to a music application; it may also be a video playing application, as long as the application is capable of audio playback and output and, with the audio data enhancement processing mode of step 204 added, can perform editing processing, including clipping, on one or more audio data in the target object acquisition area.
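For illustration only, the following Kotlin sketch models the import-and-play flow of step 201 under the assumption of a simple in-memory media library; the names PlayMode, AudioData and MediaLibrary are hypothetical and are not part of the patent or of any particular product API.

```kotlin
// Hypothetical types for illustration; not the patent's API.
enum class PlayMode { SINGLE_TRACK_LOOP, SEQUENTIAL, RANDOM }

data class AudioData(val id: Int, val title: String, val durationMs: Long)

class MediaLibrary {
    private val items = mutableListOf<AudioData>()

    // Import audio data found by a local scan or downloaded via a networked remote search.
    fun import(audio: List<AudioData>) { items.addAll(audio) }

    // Return the playback order implied by the selected first mode.
    fun playbackOrder(mode: PlayMode): List<AudioData> = when (mode) {
        PlayMode.SINGLE_TRACK_LOOP -> items.take(1)  // keep repeating the current track
        PlayMode.SEQUENTIAL -> items.toList()        // play in list order
        PlayMode.RANDOM -> items.shuffled()          // play in a random order
    }
}
```

For instance, after importing scanned or downloaded items, calling playbackOrder(PlayMode.RANDOM) would yield a shuffled playback list; how a real player schedules and decodes the tracks is left open here.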
Step 202, detecting a first user operation in the process of playing to the ith audio data, and judging whether the first user operation is used for selecting one or more audio data as a target object appointed by a user to obtain a judgment result.
Here, i ≤ N and i is a positive integer greater than 1. In this step, a user operation can be detected while a particular audio data is being played and judged immediately, so that the selected song is added to the acquisition area in real time. For example, when the 10th song is being played, if the user performs the operation of marking it as a red-heart (favorite) song, this is regarded as detecting a user operation for selecting that song, and the 10th song is added to the acquisition area. Then, when playback continues to the 14th song, another user operation can be detected and judged immediately so that the selected song is added to the acquisition area in real time: if the user marks the 14th song as a red-heart song while it is being played, a user operation for selecting that song is detected, and the 14th song is added to the acquisition area.
In another embodiment, this step may further include: detecting a plurality of first user operations in the process of playing the ith audio data to the jth audio data, so as to select at least two audio data from the (i+1)th audio data to the jth audio data as target objects appointed by the user, wherein j ≤ N and j is a positive integer greater than 1. For example, in this embodiment, a plurality of user operations are detected continuously in the process of playing one particular audio data through another, and a batch judgment is made so that the selected songs are added to the acquisition area in batch, as illustrated in the sketch below. For instance, if the user marks the 8th song as a red-heart song while it is being played, this is regarded as one user operation for selecting a song; marking the 9th song as a red-heart song while it is being played is regarded as another such user operation; marking the 13th song as a red-heart song while it is being played is regarded as yet another; finally, the 8th, 9th and 13th songs are added to the acquisition area in one batch. Of course, this is merely an example, and in actual operation there are many ways of making the batch judgment and adding songs to the acquisition area in batch.
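As a hedged illustration of steps 202-203, the sketch below adds detected selections to the acquisition area either immediately or in one batch; AudioData is the illustrative data class from the earlier sketch, and AcquisitionArea and SelectionDetector are assumed names, not the patent's implementation.

```kotlin
// AcquisitionArea and SelectionDetector are assumed names; AudioData is the data class
// from the previous sketch.
class AcquisitionArea {
    val selected = mutableListOf<AudioData>()
    fun add(audio: AudioData) { if (audio !in selected) selected.add(audio) }
}

class SelectionDetector(private val area: AcquisitionArea, private val batchMode: Boolean) {
    private val pending = mutableListOf<AudioData>()

    // Called when a first user operation (e.g. marking the currently playing song as a
    // red-heart song) is detected.
    fun onFirstUserOperation(audio: AudioData) {
        if (batchMode) pending.add(audio) else area.add(audio)
    }

    // In batch mode, add every song selected between the i-th and the j-th track at once.
    fun flushBatch() {
        pending.forEach(area::add)
        pending.clear()
    }
}
```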
Here, still taking the first application in step 201 as a music playing application, it is detected, for example, that the user has marked a song selected in the song list as a red-heart song (i.e., a song the user likes), or has put it into an added, user-defined song list, so as to obtain one or more songs that the user likes. Alternatively, without entering the song list interface, the user can mark a red-heart song with a single tap while a song is being played in real time, or put it into an added, user-defined song list.
Step 203, when the judgment result indicates that the first user operation is used for selecting one or more audio data as the target object appointed by the user, adding the selected one or more audio data into the target object acquisition area.
Here, based on the detection in step 202, when it is known that the first user operation is for the song selection operation, but not for other touch operations, it is determined through step 203 that the selected song is the audio data that the user designates to be collected, so as to be used for subsequent editing processing such as clipping and synthesis. The songs are first put into the acquisition area in the interface corresponding to the audio data enhancement processing mode.
And 204, entering an audio data enhancement processing mode of the first application when a preset condition is met, and performing editing processing including clipping on one or more audio data in the target object acquisition area.
Here, the processing such as clipping and synthesizing is performed on one or more pieces of audio data placed in the acquisition area in the interface corresponding to the audio data enhancement processing mode in step 203.
Through steps 201-204, no additional data editing application needs to be installed to edit the audio data. Overly professional data editing applications occupy system resources and cause problems such as slowing down of the system processor; here, a single, simple music playing application with a user-friendly interface added to it is enough, which makes it easy for the user to master the audio data enhancement processing function for editing, is very convenient to use, improves usability, occupies few system resources, and keeps the system processor from being slowed down. Depending on the user's needs, it is feasible either to add songs to the acquisition area one at a time or to add them to the acquisition area in batches after multiple detections.
Here, with the audio data enhancement processing function of this simple music playing application, the function can be entered at any time: during music playback, before playback starts, after playback ends, and so on, and several different pieces of audio data can be edited and synthesized at the same time. The different audio data come from at least three types of sources: the downloaded audio data itself; the sound-mixing effect data carried by the system itself; and external data recorded by the user, such as bird calls, the sound of a stream, or recordings of the user's own voice or of friends.
Example three:
based on the first to second embodiments, the information processing method according to the embodiment of the present invention, wherein entering the audio data enhancement processing mode of the first application when the preset condition is satisfied includes: the audio data enhancement processing mode of the first application can be entered in the playing process of the audio data, after the playing of the audio data is paused, after the first application is exited, or under the condition that the first application is restarted to enter.
Example four:
based on the above first to third implementations, an information processing method according to an embodiment of the present invention, as shown in fig. 4, includes:
step 301, locally scanning the terminal to obtain the N audio data, importing the N audio data into a media library corresponding to the first application, and playing the N audio data in any one of the single-track loop mode, the sequential playing mode, and the random playing mode.
Here, this step may also be: downloading the N audio data from a network side by means of a networked remote search, importing the N audio data into a media library corresponding to the first application, and playing the N audio data in any one of the single-track loop mode, the sequential playing mode and the random playing mode.
Here, for example, when the first application is a music playing application, the terminal downloads a plurality of audio data from the server in advance and stores them locally for the scenario of playing songs, or downloads and plays songs from the server in real time. The first application is not limited to a music application; it may also be a video playing application, as long as the application is capable of audio playback and output and, with the audio data enhancement processing mode of step 304 added, can perform editing processing, including clipping, on one or more audio data in the target object acquisition area.
Step 302, detecting a first user operation in the process of playing to the ith audio data, and judging whether the first user operation is used for selecting one or more audio data as a target object specified by a user to obtain a judgment result.
Here, i ≤ N and i is a positive integer greater than 1. In this step, a user operation can be detected while a particular audio data is being played and judged immediately, so that the selected song is added to the acquisition area in real time. For example, when the 10th song is being played, if the user performs the operation of marking it as a red-heart (favorite) song, this is regarded as detecting a user operation for selecting that song, and the 10th song is added to the acquisition area. Then, when playback continues to the 14th song, another user operation can be detected and judged immediately so that the selected song is added to the acquisition area in real time: if the user marks the 14th song as a red-heart song while it is being played, a user operation for selecting that song is detected, and the 14th song is added to the acquisition area.
In another embodiment, this step may further include: detecting a plurality of first user operations in the process of playing the ith audio data to the jth audio data, so as to select at least two audio data from the (i+1)th audio data to the jth audio data as target objects appointed by the user, wherein j ≤ N and j is a positive integer greater than 1. For example, in this embodiment, a plurality of user operations are detected continuously in the process of playing one particular audio data through another, and a batch judgment is made so that the selected songs are added to the acquisition area in batch. For instance, if the user marks the 8th song as a red-heart song while it is being played, this is regarded as one user operation for selecting a song; marking the 9th song as a red-heart song while it is being played is regarded as another such user operation; marking the 13th song as a red-heart song while it is being played is regarded as yet another; finally, the 8th, 9th and 13th songs are added to the acquisition area in one batch. Of course, this is merely an example, and in actual operation there are many ways of making the batch judgment and adding songs to the acquisition area in batch.
Here, still taking the first application in step 301 as a music playing application, it is detected, for example, that the user has marked a song selected in the song list as a red-heart song (i.e., a song the user likes), or has put it into an added, user-defined song list, so as to obtain one or more songs that the user likes. Alternatively, without entering the song list interface, the user can mark a red-heart song with a single tap while a song is being played in real time, or put it into an added, user-defined song list.
And 303, when the judgment result indicates that the first user operation is used for selecting one or more audio data as the target object appointed by the user, adding the selected one or more audio data into the target object acquisition area.
Here, based on the detection in step 302, when it is known that the first user operation is for the song selection operation, but not for other touch operations, it is determined through step 303 that the selected song is the audio data that the user designates to be collected, so as to be used for subsequent editing processing such as clipping and synthesis. The songs are first put into the acquisition area in the interface corresponding to the audio data enhancement processing mode.
And 304, entering an audio data enhancement processing mode of the first application when a preset condition is met, and acquiring a second user operation acting on the mth audio data in the acquisition area.
Step 305, in response to the second user operation, adjusting a time axis of the mth audio data and/or the m + nth audio data to obtain at least one first time point and at least one second time point, taking the first time point as a truncation starting point for truncating a partial audio segment from the mth audio data and/or the m + nth audio data, and taking the second time point as a truncation ending point for truncating the partial audio segment from the mth audio data and/or the m + nth audio data; m is a positive integer greater than 1, and n is a positive integer.
Here, in the case of one audio data, regardless of the order, the audio data is arbitrarily selected from the collection area, for example, after the time axis of the mth audio data or the mth + nth audio data is adjusted to be clipped, one audio data and other data sources, such as user recording or mixing data, may be combined with each other to be synthesized into final target data. For the case of multiple audio data, the order may also be not considered, and multiple audio data may be arbitrarily selected from the acquisition area, for example, after the time axes of the mth audio data and the mth + nth audio data are adjusted to be clipped, the multiple audio data may be synthesized into target data, or any number of the multiple audio data may be combined with other data sources, such as user recording or mixing data, to be synthesized into final target data.
The clipping process is performed on one audio data or on a plurality of audio data, and the principle is the same, as described in step 305, which is not described in detail.
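The following sketch illustrates the time-axis adjustment of step 305 under the assumption that the two chosen time points are simply millisecond offsets into the source audio; AudioClip and truncate() are hypothetical names, and AudioData is the illustrative data class from the earlier sketch.

```kotlin
// AudioClip and truncate() are hypothetical; AudioData is the data class defined earlier.
data class AudioClip(val source: AudioData, val startMs: Long, val endMs: Long)

fun truncate(source: AudioData, firstPointMs: Long, secondPointMs: Long): AudioClip {
    // Treat the earlier of the two time points as the truncation starting point and the
    // later as the truncation ending point, clamped to the duration of the source audio.
    val start = minOf(firstPointMs, secondPointMs).coerceIn(0L, source.durationMs)
    val end = maxOf(firstPointMs, secondPointMs).coerceIn(0L, source.durationMs)
    require(end > start) { "the truncation ending point must lie after the starting point" }
    return AudioClip(source, start, end)
}
```

The earlier of the two points plays the role the description assigns to the first time point and the later the role of the second time point; how the actual audio samples are cut out is left to the player's decoding pipeline.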
And step 306, acquiring a third user operation after the interception starting point and the interception ending point are set.
Step 307, in response to the third user operation, double-clicking the time axis or dragging the partial audio segment to a synthesis area according to the interception starting point and the interception ending point, so as to complete the clipping process.
Here, the synthesis area and the capture area are located in the same processing interface in the audio data enhancement processing mode, and the processing such as clipping and synthesizing may be performed on one or more audio data in the capture area placed in the interface corresponding to the audio data enhancement processing mode in step 303.
Through steps 301-307, no additional data editing application needs to be installed to edit the audio data. Overly professional data editing applications occupy system resources and cause problems such as slowing down of the system processor; here, a single, simple music playing application with a user-friendly interface added to it is enough, which makes it easy for the user to master the audio data enhancement processing function for editing, is very convenient to use, improves usability, occupies few system resources, and keeps the system processor from being slowed down. Depending on the user's needs, it is feasible either to add songs to the acquisition area one at a time or to add them to the acquisition area in batches after multiple detections.
Here, with the audio data enhancement processing function of this simple music playing application, the function can be entered at any time: during music playback, before playback starts, after playback ends, and so on, and several different pieces of audio data can be edited and synthesized at the same time. The different audio data come from at least three types of sources: the downloaded audio data itself; the sound-mixing effect data carried by the system itself; and external data recorded by the user, such as bird calls, the sound of a stream, or recordings of the user's own voice or of friends.
Based on the first to fourth embodiments, the information processing method according to the embodiment of the present invention further includes an operation of performing data synthesis on the clipped data, as follows:
step 401, obtaining a plurality of to-be-processed partitions of the composition area, where each to-be-processed partition is used to store one piece of audio data, where the audio data includes: the mth audio data and/or the m + nth audio data.
Step 402, performing adjustment operations including deletion and adjustment of an arrangement sequence on the plurality of to-be-processed partitions according to an adjustment strategy to obtain at least one target to-be-processed partition.
Step 403, after detecting that the adjustment operation is finished, extracting the one or more audio data located in the at least one target to-be-processed partition.
And step 404, performing data synthesis on the data obtained by clipping. For example, if there is one audio data, adding sound effect data for mixing sound and/or audio data recorded by the user is supported, and synthesizing a plurality of data into a first target data; for example, if there are a plurality of audio data, the plurality of audio data are synthesized into a second target data; or, adding sound effect data for mixing sound and/or audio data recorded by the user himself is supported, and a plurality of data are synthesized into a third target data.
Here, there are at least three data sources, audio data of the acquisition area, sound effects of the mixing sound, and the sound recorded by the user himself. In step 404, the final synthesis results of the first target data, the second target data, and the third target data may be obtained by synthesizing according to the respective combination methods.
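A minimal sketch of steps 401-404, assuming the synthesis area is an ordered list of to-be-processed partitions (the "small boxes" of fig. 5) that can be deleted or reordered before synthesis; SynthesisArea is an assumed name, AudioClip is the illustrative type from the clipping sketch, and the extras parameter stands in for mixed-sound effect data or the user's own recording.

```kotlin
// SynthesisArea is an assumed name; AudioClip is the type from the clipping sketch above.
class SynthesisArea {
    private val partitions = mutableListOf<AudioClip>()

    fun addPartition(clip: AudioClip) { partitions.add(clip) }        // put a clip into a box
    fun deletePartition(index: Int) { partitions.removeAt(index) }    // delete an unneeded box
    fun move(from: Int, to: Int) { partitions.add(to, partitions.removeAt(from)) } // reorder boxes

    // Concatenate the partitions in their current order into the target data; 'extras'
    // stands in for mixed-sound effect data or audio recorded by the user.
    fun synthesize(extras: List<AudioClip> = emptyList()): List<AudioClip> =
        partitions.toList() + extras
}
```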
Fig. 5 is a schematic diagram illustrating how the acquisition area performs data editing using a time axis and how data is synthesized in the synthesis area, in the audio data enhancement processing mode of the first application itself. As shown in fig. 5, the functional interface of the audio data enhancement processing mode includes the acquisition area and, below it, the synthesis area. 1) First, the time axis within the acquisition area is adjusted, as shown by the two highlighted marker points on the song's time axis in fig. 5, for example the cut start point and cut end point in the song "Redundant Explanation". After the interception is finished, double-clicking the time axis, or dragging the intercepted partial audio segment of the song "Redundant Explanation", moves that segment to the synthesis area, where it is presented as a small box. 2) Partial audio segments of the remaining two songs are intercepted in the same way as in step 1) according to the user's needs, and are placed in small boxes of the synthesis area respectively. 3) After all editing operations are finished, the user checks whether the order of the small boxes meets the requirements; if the order of some partial audio segments does not, the placement order of the small boxes in the synthesis area can be adjusted, and the delete button can be clicked to delete any time region that is not needed. Finally, the partial audio segments, edited according to the user's needs and arranged by time region, are synthesized according to the arrangement of the small boxes in the synthesis area, and clicking the finish-editing button completes the editing and synthesis operation.
Example five:
as shown in fig. 6, a terminal according to an embodiment of the present invention includes:
a playing unit 11, configured to play N audio data according to a first mode through a first application, where N is a positive integer greater than 1;
the detection unit 12 is configured to detect a first user operation, determine whether the first user operation is used to select one or more audio data as a target object specified by a user, and obtain a determination result;
the acquisition unit 13 is configured to add the selected one or more audio data into the target object acquisition area when the determination result indicates that the first user operation is used to select the one or more audio data as the target object specified by the user;
and the editing processing unit 14 is configured to enter an audio data enhancement processing mode of the first application when a preset condition is met, and perform editing processing including clipping on one or more pieces of audio data in the target object acquisition area.
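Purely as an architectural illustration, the sketch below wires the four units of embodiment five together using the hypothetical types defined earlier; the mapping of MediaLibrary to the playing unit is an assumption made for the sake of the example, not the patent's implementation.

```kotlin
// An architectural sketch of the terminal of embodiment five; the unit-to-type mapping
// is an assumption made for illustration, not the patent's implementation.
class Terminal(
    private val playingUnit: MediaLibrary,          // plays N audio data in the first mode
    private val detectionUnit: SelectionDetector,   // detects the first user operation
    private val acquisitionUnit: AcquisitionArea,   // holds the user-designated target objects
    private val editingUnit: SynthesisArea          // clips and synthesizes the collected data
) {
    // When the preset condition is met, hand the collected and truncated clips to the
    // editing processing unit for clipping and synthesis.
    fun enterEnhancementMode(clips: List<AudioClip>) {
        clips.forEach(editingUnit::addPartition)
    }
}
```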
Here, for example, when the first application is a music playing application, the terminal downloads a plurality of audio data from the server in advance and stores them locally for the scenario of playing songs, or downloads and plays songs from the server in real time. The first application is not limited to a music application; it may also be a video playing application, as long as the application is capable of audio playback and output and, with the audio data enhancement processing mode corresponding to the first application added, can perform editing processing, including clipping, on one or more audio data in the target object acquisition area.
Specifically, when it is detected that the first user operation is a song selection operation rather than some other touch operation, the selected song is determined to be audio data that the user has designated for acquisition, to be used in subsequent editing processing such as clipping and synthesis. For example, it is detected that the user has marked a song selected in the song list as a red-heart song (i.e., a song the user likes), or has put it into an added, user-defined song list, so that one or more songs the user likes are obtained. Alternatively, without entering the song list interface, the user can mark a red-heart song with a single tap while a song is being played in real time, or put it into an added, user-defined song list. The songs are therefore first put into the acquisition area in the interface corresponding to the audio data enhancement processing mode, for subsequent editing processing such as clipping and synthesis. Afterwards, processing such as clipping and synthesis is performed on the one or more audio data placed in the acquisition area in that interface.
In an embodiment of the present invention, the first mode includes: at least one of a single-track loop mode, a sequential play mode and a random play mode; the playback unit is further configured to: locally scanning from a terminal to obtain the N audio data, importing the N audio data into a media library corresponding to the first application, and playing the N audio data in any one of the single-song circulation mode, the sequential playing mode and the random playing mode; or, the N audio data are downloaded from a network side in a networked remote search manner, and are imported into a media library corresponding to the first application, and the N audio data are played in any one of the single-track loop mode, the sequential playing mode and the random playing mode.
In an implementation manner of the embodiment of the present invention, the detecting unit is further configured to: detect one first user operation during the playing of the ith audio data, so as to select the ith audio data as a target object specified by the user, where i ≤ N and i is a positive integer greater than 1; or detect a plurality of first user operations during the playing of the ith audio data through the jth audio data, so as to select at least two audio data from the (i+1)th audio data to the jth audio data as target objects specified by the user, where j ≤ N and j is a positive integer greater than 1.
In an embodiment of the present invention, the editing processing unit is further configured to: the audio data enhancement processing mode of the first application can be entered in the playing process of the audio data, after the playing of the audio data is paused, after the first application is exited, or under the condition that the first application is restarted to enter.
In an embodiment of the present invention, the editing processing unit is further configured to: acquire a second user operation acting on the mth audio data in the acquisition area; in response to the second user operation, adjust the time axis of the mth audio data and/or the m + nth audio data to obtain at least one first time point and at least one second time point, take the first time point as a truncation starting point for truncating a partial audio segment from the mth audio data and/or the m + nth audio data, and take the second time point as a truncation ending point for truncating the partial audio segment from the mth audio data and/or the m + nth audio data, where m is a positive integer greater than 1 and n is a positive integer; acquire a third user operation after the truncation starting point and the truncation ending point are set; in response to the third user operation, double-click the time axis or drag the partial audio segment to a synthesis area according to the truncation starting point and the truncation ending point, so as to complete the clipping process; the synthesis area and the acquisition area are located in the same processing interface in the audio data enhancement processing mode.
In an embodiment of the present invention, the editing processing unit is further configured to: acquire a plurality of to-be-processed partitions of the synthesis area, where each to-be-processed partition is used to store audio data, and the audio data includes the mth audio data and/or the m + nth audio data; perform adjustment operations, including deletion and adjustment of the arrangement order, on the plurality of to-be-processed partitions according to an adjustment strategy, to obtain at least one target to-be-processed partition; after detecting that the adjustment operation is finished, extract the one or more audio data located in the at least one target to-be-processed partition; if there is one audio data, adding sound effect data for sound mixing and/or audio data recorded by the user is supported, and a plurality of data are synthesized into a first target data; if there are a plurality of audio data, the plurality of audio data are synthesized into a second target data; or, adding sound effect data for sound mixing and/or audio data recorded by the user is supported, and a plurality of data are synthesized into a third target data.
It should be noted that the terminal may be an electronic device such as a PC, a portable electronic device such as a PAD, a tablet computer, a laptop computer, or an intelligent mobile terminal such as a mobile phone, and is not limited to the description herein; the server may be an electronic device formed by a cluster system, and integrated into one or a plurality of unit functions to implement the unit functions, and both the client and the server at least include a database for storing data and a processor for data processing, or include a storage medium arranged in the server or a storage medium arranged independently.
As for the processor for data processing, when executing processing, the processor may be implemented by a microprocessor, a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or a Field-Programmable Gate Array (FPGA); as for the storage medium, the storage medium contains operation instructions, which may be computer executable code, and these operation instructions implement the steps in the flow of the information processing method according to the above-described embodiments of the present invention.
Example six:
an example of this terminal as hardware entity S11 is shown in fig. 7. The apparatus comprises a processor 31, a storage medium 32 and at least one external communication interface 33; the processor 31, the storage medium 32, and the external communication interface 33 are all connected by a bus 34.
Here, it should be noted that: the above description related to the terminal and the server items is similar to the above description of the method, and the description of the beneficial effects of the same method is omitted for brevity. For technical details that are not disclosed in the embodiments of the terminal and the server of the present invention, refer to the description of the embodiments of the method of the present invention.
The embodiment of the invention is explained by taking a practical application scene as an example as follows:
the application scenario adopts the embodiment of the invention, and if the mobile terminal takes a mobile phone as an example, the mobile terminal can be specifically a scheme for adding a DIY music editing function based on music software of a smart phone terminal, so that the music playing application of the mobile phone terminal has a DIY music synthesizing function. With the addition of the DIY composition music function to this music playback application, editing (cropping) and composition of various audio sources is possible. Through the enhanced DIY music editing function, the user can conveniently and quickly customize music in a user-customized manner, and edit favorite music, and unlimited possibility is created, and a more quick and flexible music editing, cutting and synthesizing function entrance except a PC end is provided for the user. The music playing application at the mobile phone end not only has basic playing functions including playing, pausing, fast forwarding and fast rewinding functions, but also can be edited, and special data editing processing software is not required to be additionally added. If the data needs to be edited, special data editing processing software can be used only at the PC end or additionally added at the mobile phone end.
Still referring to fig. 5, when the user plays music, a particularly liked rhythm or segment may be edited and produced in a DIY manner as follows. The user interface is provided with a music piece collecting area and a music piece editing and synthesizing area, and the procedure includes the following steps:
step 501, in the music piece collecting area, determining the music piece to be collected by moving a starting point cursor and an ending cursor in the collecting area, and finely adjusting the starting point and the ending point according to the following left and right progress.
Step 502, in the music editing and synthesizing area, containing boxes are arranged in sequence in the editing area; boxes can be added or deleted, and their positions can be swapped freely. After a music piece has been collected and determined, the collected music pieces can be put into the containing boxes in sequence by dragging or double-clicking them into the editing area, and the containing boxes can be reordered by dragging.
Step 503, saving after production is finished: clicking the save button saves the edited and synthesized music, and a way to edit the file name and save is provided.
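A possible end-to-end use of the earlier sketches for the DIY flow of steps 501-503 might look as follows; all helper types (AudioData, truncate, SynthesisArea) are the illustrative ones defined earlier in this description, not the product's actual interface.

```kotlin
// All helper types below (AudioData, truncate, SynthesisArea) are the illustrative ones
// defined earlier in this description, not the product's actual interface.
fun main() {
    val songA = AudioData(1, "Song A", 240_000L)
    val songB = AudioData(2, "Song B", 200_000L)
    val area = SynthesisArea()

    // Step 501: pick a truncation starting point and ending point on each song's time axis.
    area.addPartition(truncate(songA, firstPointMs = 30_000L, secondPointMs = 55_000L))
    area.addPartition(truncate(songB, firstPointMs = 10_000L, secondPointMs = 42_000L))

    // Step 502: reorder the containing boxes in the editing area if needed.
    area.move(from = 1, to = 0)

    // Step 503: finish editing; the synthesized result would then be saved under a file name.
    val result = area.synthesize()
    println("Synthesized " + result.size + " clips: " + result.map { it.source.title })
}
```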
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Alternatively, the integrated unit of the present invention may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present invention may be essentially implemented or a part contributing to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic or optical disk, or various other media that can store program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (13)

1. An information processing method, characterized in that the method comprises:
playing N audio data according to a first mode through a first application for audio or video playing, wherein N is a positive integer greater than 1;
detecting a first user operation, and judging whether the first user operation is used for selecting one or more audio data as a target object appointed by a user to obtain a judgment result;
when the judgment result indicates that the first user operation is used for selecting one or more audio data as the target object appointed by the user, adding the selected one or more audio data into the target object acquisition area;
entering an audio data enhancement processing mode of the first application different from the first mode for playing the audio data when a preset condition is met;
adjusting a time axis of the mth audio data and/or the m + nth audio data to obtain at least one first time point and at least one second time point;
taking the first time point as a truncation starting point for truncating a partial audio segment from the mth audio data and/or the m + nth audio data, and taking the second time point as a truncation ending point for truncating the partial audio segment from the mth audio data and/or the m + nth audio data; m is a positive integer greater than 1, n is a positive integer of 1, … n;
and adding the part of the audio segments into a synthesis area according to the interception starting point and the interception ending point so as to finish the clipping processing.
2. The method of claim 1, wherein the first mode comprises: at least one of a single-track loop mode, a sequential play mode and a random play mode;
the playing of the N audio data by the first application in the first mode includes:
locally scanning from a terminal to obtain the N audio data, importing the N audio data into a media library corresponding to the first application, and playing the N audio data in any one of the single-track loop mode, the sequential playing mode and the random playing mode;
or, the N audio data are downloaded from a network side in a networked remote search manner, and are imported into a media library corresponding to the first application, and the N audio data are played in any one of the single-track loop mode, the sequential playing mode and the random playing mode.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
detecting one first user operation in the process of playing to ith audio data to select the ith audio data as a target object specified by the user, wherein i ≤ N and i is a positive integer greater than 1;
or,
detecting a plurality of first user operations in the process of playing the ith audio data to the jth audio data, so as to select at least two audio data from the (i+1)th audio data to the jth audio data as target objects appointed by the user, wherein j ≤ N and j is a positive integer greater than 1.
4. The method according to claim 1 or 2, wherein entering the audio data enhancement processing mode of the first application itself when the preset condition is met comprises: the audio data enhancement processing mode of the first application can be entered in the playing process of the audio data, after the playing of the audio data is paused, after the first application is exited, or under the condition that the first application is restarted to enter.
5. The method according to claim 1 or 2,
the adjusting the time axis of the mth audio data and/or the m + nth audio data to obtain at least one first time point and at least one second time point includes:
acquiring a second user operation acting on the mth audio data and/or the m + nth audio data in the acquisition area;
adjusting a time axis of the mth audio data and/or the m + nth audio data in response to the second user operation to obtain the at least one first time point and the at least one second time point;
the adding the part of the audio segment into a synthesis area according to the interception starting point and the interception ending point to complete the clipping process comprises:
acquiring a third user operation after the interception starting point and the interception ending point are set;
responding to the third user operation, double-clicking the time axis or dragging the part of the audio clips to a synthesis area according to the interception starting point and the interception ending point so as to finish clipping processing;
the synthesis area and the acquisition area are positioned in the same processing interface in the audio data enhancement processing mode.
6. The method of claim 5, further comprising:
acquiring a plurality of to-be-processed partitions of the synthesis area, wherein each to-be-processed partition is used for storing audio data, and the audio data comprises: the mth audio data and/or the m + nth audio data;
adjusting the plurality of to-be-processed partitions according to an adjusting strategy, wherein the adjusting operation comprises deleting and adjusting the sequence of arrangement to obtain at least one target to-be-processed partition;
after the adjustment operation is detected to be finished, extracting one or more audio data in the at least one target to-be-processed partition;
when the number of the audio data is one, adding sound effect data for sound mixing and/or audio data recorded by a user is supported, and a plurality of data are synthesized into a first target data;
when the audio data are multiple, synthesizing the multiple audio data into a second target data; or, adding sound effect data for mixing sound and/or audio data recorded by the user himself is supported, and a plurality of data are synthesized into a third target data.
7. A terminal, characterized in that the terminal comprises:
a playing unit, configured for playing N audio data in a first mode through a first application for audio or video playing, N being a positive integer greater than 1;
a detection unit, configured for detecting a first user operation and judging whether the first user operation is for selecting one or more audio data as a target object appointed by the user, to obtain a judgment result;
an acquisition unit, configured for adding the selected one or more audio data into the target object acquisition area when the judgment result indicates that the first user operation is for selecting one or more audio data as the target object appointed by the user;
an editing processing unit, configured for: entering an audio data enhancement processing mode of the first application, different from the first mode for playing the audio data, when a preset condition is met;
adjusting a time axis of the mth audio data and/or the m + nth audio data to obtain at least one first time point and at least one second time point;
taking the first time point as a truncation starting point for truncating a partial audio segment from the mth audio data and/or the m + nth audio data, and taking the second time point as a truncation ending point for truncating the partial audio segment from the mth audio data and/or the m + nth audio data, wherein m is a positive integer greater than 1, and n is a positive integer taken from 1, …, n;
and adding the part of the audio segments into a synthesis area according to the interception starting point and the interception ending point so as to finish the clipping processing.
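(Illustrative note, not part of the claims: the terminal of claim 7 decomposes into four cooperating units. A skeletal Kotlin sketch of that decomposition is given below; every interface and member name is an assumption made for illustration, not the patented implementation.)

    // Hypothetical interfaces mirroring the four units recited in claim 7.
    enum class PlaybackMode { SINGLE_TRACK_LOOP, SEQUENTIAL, RANDOM }

    interface PlayingUnit {
        fun play(audioIds: List<String>, mode: PlaybackMode)        // play N audio data in the first mode
    }

    interface DetectionUnit {
        fun isTargetSelection(firstUserOperation: Any): Boolean     // produce the judgment result
    }

    interface AcquisitionUnit {
        fun addToAcquisitionArea(audioId: String)                   // target object acquisition area
    }

    interface EditingProcessingUnit {
        fun enterEnhancementMode()                                   // when the preset condition is met
        fun clipToSynthesisArea(audioId: String, startSec: Double, endSec: Double)
    }

    // The terminal simply wires the four units together.
    class Terminal(
        val playingUnit: PlayingUnit,
        val detectionUnit: DetectionUnit,
        val acquisitionUnit: AcquisitionUnit,
        val editingProcessingUnit: EditingProcessingUnit
    )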
8. The terminal of claim 7, wherein the first mode comprises: at least one of a single-track loop mode, a sequential play mode and a random play mode;
the playback unit is further configured to:
locally scanning the terminal to obtain the N audio data, importing the N audio data into a media library corresponding to the first application, and playing the N audio data in any one of the single-track loop mode, the sequential playing mode and the random playing mode;
or, the N audio data are downloaded from a network side in a networked remote search manner, and are imported into a media library corresponding to the first application, and the N audio data are played in any one of the single-track loop mode, the sequential playing mode and the random playing mode.
9. The terminal according to claim 7 or 8, wherein the detecting unit is further configured to:
detecting one first user operation in the process of playing up to the ith audio data, to select the ith audio data as a target object specified by the user, wherein i is less than or equal to N and is a positive integer greater than 1;
or,
detecting a plurality of first user operations in the process of playing the ith to the jth audio data, to select at least two audio data from among the ith, the (i+1)th, the (i+2)th, …, up to the jth audio data as target objects appointed by the user, wherein j is less than or equal to N and is a positive integer greater than 1.
10. The terminal according to claim 7 or 8, wherein the editing processing unit is further configured to enter the audio data enhancement processing mode of the first application during playing of the audio data, after playing of the audio data is paused, after the first application is exited, or when the first application is restarted.
11. The terminal according to claim 7 or 8, wherein the editing processing unit is further configured to:
acquiring a second user operation acting on the mth audio data and/or the m + nth audio data in the acquisition area;
adjusting a time axis of the mth audio data and/or the m + nth audio data in response to the second user operation to obtain the at least one first time point and the at least one second time point;
acquiring a third user operation after the interception starting point and the interception ending point are set;
in response to the third user operation, adding the partial audio segment to a synthesis area according to the interception starting point and the interception ending point, by double-clicking the time axis or by dragging the partial audio segment, so as to finish the clipping processing;
the synthesis area and the acquisition area are positioned in the same processing interface in the audio data enhancement processing mode.
12. The terminal of claim 11, wherein the editing processing unit is further configured to:
acquiring a plurality of to-be-processed partitions of the synthesis area, wherein each to-be-processed partition is used for storing audio data, and the audio data comprises: the mth audio data and/or the m + nth audio data;
adjusting the plurality of to-be-processed partitions according to an adjustment strategy, wherein the adjustment operation comprises deletion and reordering, to obtain at least one target to-be-processed partition;
after the adjustment operation is detected to be finished, extracting one or more audio data in the at least one target to-be-processed partition;
when the number of the audio data is one, adding sound effect data for mixing and/or audio data recorded by the user is supported, and the resulting plurality of data are synthesized into first target data;
when the number of the audio data is more than one, synthesizing the multiple audio data into second target data; or, adding sound effect data for mixing and/or audio data recorded by the user is supported, and the resulting plurality of data are synthesized into third target data.
13. A computer-readable storage medium having stored thereon executable instructions that, when executed, implement the steps of the method of any one of claims 1 to 6.
CN201510716061.2A 2015-10-28 2015-10-28 Information processing method and terminal Active CN106653067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510716061.2A CN106653067B (en) 2015-10-28 2015-10-28 Information processing method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510716061.2A CN106653067B (en) 2015-10-28 2015-10-28 Information processing method and terminal

Publications (2)

Publication Number Publication Date
CN106653067A CN106653067A (en) 2017-05-10
CN106653067B true CN106653067B (en) 2020-03-17

Family

ID=58830816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510716061.2A Active CN106653067B (en) 2015-10-28 2015-10-28 Information processing method and terminal

Country Status (1)

Country Link
CN (1) CN106653067B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107911342A (en) * 2017-10-27 2018-04-13 北京雷客天地科技有限公司 A kind of intelligence carousel method and system
CN109147745B (en) * 2018-07-25 2020-03-10 北京达佳互联信息技术有限公司 Song editing processing method and device, electronic equipment and storage medium
CN109166596A (en) * 2018-08-10 2019-01-08 北京微播视界科技有限公司 Music editor's method, apparatus, terminal device and computer readable storage medium
CN114157894B (en) * 2021-11-30 2024-01-30 北京中联合超高清协同技术中心有限公司 Audio rebroadcasting method and audio rebroadcasting system supporting panoramic sound

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1779777A (en) * 2005-08-16 2006-05-31 深圳市彩秀科技有限公司 Audio-frequency editing and converting method by cutting audio-frequency wave form
CN1909057A (en) * 2005-08-07 2007-02-07 黄金富 Portable data processing device with karaoke function and melody selecting method
CN101996667A (en) * 2009-08-10 2011-03-30 鸿富锦精密工业(深圳)有限公司 Method for playing audio file in electronic device
CN102568527A (en) * 2011-12-20 2012-07-11 广东步步高电子工业有限公司 Method and system for easily cutting audio files and applied mobile handheld device
CN103336686A * 2013-06-05 2013-10-02 福建星网视易信息系统有限公司 Editing device and editing method for terminal playing template of digital signage system
CN103700292A (en) * 2013-12-25 2014-04-02 广州鸿根信息科技有限公司 System for making teaching video
CN104835520A (en) * 2015-03-27 2015-08-12 广州荔支网络技术有限公司 Mobile equipment recording method and device
CN105094802A (en) * 2015-06-15 2015-11-25 联想(北京)有限公司 Information processing method and electronic equipment
CN105867737A (en) * 2016-03-28 2016-08-17 珠海格力电器股份有限公司 Application-program processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102959544B (en) * 2010-05-04 2016-06-08 沙扎姆娱乐有限公司 For the method and system of synchronized multimedia

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1909057A (en) * 2005-08-07 2007-02-07 黄金富 Portable data processing device with karaoke function and melody selecting method
CN1779777A (en) * 2005-08-16 2006-05-31 深圳市彩秀科技有限公司 Audio-frequency editing and converting method by cutting audio-frequency wave form
CN101996667A (en) * 2009-08-10 2011-03-30 鸿富锦精密工业(深圳)有限公司 Method for playing audio file in electronic device
CN102568527A (en) * 2011-12-20 2012-07-11 广东步步高电子工业有限公司 Method and system for easily cutting audio files and applied mobile handheld device
CN103336686A * 2013-06-05 2013-10-02 福建星网视易信息系统有限公司 Editing device and editing method for terminal playing template of digital signage system
CN103700292A (en) * 2013-12-25 2014-04-02 广州鸿根信息科技有限公司 System for making teaching video
CN104835520A (en) * 2015-03-27 2015-08-12 广州荔支网络技术有限公司 Mobile equipment recording method and device
CN105094802A (en) * 2015-06-15 2015-11-25 联想(北京)有限公司 Information processing method and electronic equipment
CN105867737A (en) * 2016-03-28 2016-08-17 珠海格力电器股份有限公司 Application-program processing method and device

Also Published As

Publication number Publication date
CN106653067A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106095595B (en) Information sharing method and terminal between a kind of application program
US7730414B2 (en) Graphical display
CN103733197B (en) The management of local and remote media item
CN106653067B (en) Information processing method and terminal
WO2018076174A1 (en) Multimedia editing method and device, and smart terminal
CN106468987B (en) Information processing method and client
JP2020515124A (en) Method and apparatus for processing multimedia resources
CN107241511B (en) Music playing method, device, storage medium and terminal
CN109729372A (en) Method for switching between, device, terminal, server and storage medium is broadcast live
EP2811399B1 (en) Method and terminal for starting music application
CN111526427B (en) Video generation method and device and electronic equipment
CN107872685A (en) A kind of player method of multi-medium data, device and computer installation
CN111966860A (en) Audio playing method and device and electronic equipment
CN109600643A (en) Video providing method, playback method, device and storage medium
CN115103232B (en) Video playing method, device, equipment and storage medium
CN113918522A (en) File generation method and device and electronic equipment
CN112256233A (en) Music playing method and device
CN107220309A (en) Obtain the method and device of multimedia file
CN109873905A (en) Audio frequency playing method, audio synthetic method, device and storage medium
CN104462276B (en) A kind of audio frequency playing method and device for desktop widget
CN112286421A (en) Playlist processing method and device and electronic equipment
JP4646249B2 (en) Program recording medium, portable video game machine, playback control program, and playback control method
US20180336277A1 (en) Managing Media Collections Using Directed Acyclic Graphs
JP2009301478A (en) Similar play list retrieving method, program and apparatus
CN115190367A (en) Video playing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant