CN115346503A - Song creation method, song creation apparatus, storage medium, and electronic device - Google Patents


Info

Publication number
CN115346503A
CN115346503A (application number CN202210967108.2A)
Authority
CN
China
Prior art keywords
song
strategy
target
interface
lyric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210967108.2A
Other languages
Chinese (zh)
Inventor
江琳
梁晓晶
李想
黄安麒
白帆
刘华平
王逸天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Netease Cloud Music Technology Co Ltd
Original Assignee
Hangzhou Netease Cloud Music Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Netease Cloud Music Technology Co Ltd filed Critical Hangzhou Netease Cloud Music Technology Co Ltd
Priority to CN202210967108.2A
Publication of CN115346503A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H — ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 — Details of electrophonic musical instruments
    • G10H 1/0008 — Associated control or indicating means
    • G10H 1/0025 — Automatic or semi-automatic music composition, e.g. producing random music, applying rules from music theory or modifying a musical piece
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 — Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/686 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 — Handling natural language data
    • G06F 40/30 — Semantic analysis
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H — ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 — Details of electrophonic musical instruments
    • G10H 1/36 — Accompaniment arrangements
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H — ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 7/00 — Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H 7/002 — Instruments in which the tones are synthesised from a data store, e.g. computer organs using a common processing for different operations or calculations, and a set of microinstructions (programme) to control the sequence thereof
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H — ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 7/00 — Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H 7/02 — Instruments in which the tones are synthesised from a data store, e.g. computer organs in which amplitudes at successive sample points of a tone waveform are stored in one or more memories

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present disclosure relate to a song creation method, a song creation apparatus, a storage medium, and an electronic device, in the technical field of human-computer interaction. The song creation method includes: in response to a trigger operation on a song creation control, displaying a song recommendation interface, where the song recommendation interface includes M songs to be imitated; in response to a song to be imitated being selected from the song recommendation interface, displaying a song strategy interface that includes a plurality of song strategies; in response to a target song strategy being selected from the song strategy interface, displaying a lyric strategy interface for selecting or entering a lyric strategy; in response to a target lyric strategy being determined from the lyric strategy interface, displaying a sound source strategy interface; and in response to a target sound source strategy being determined from the sound source strategy interface, displaying the target song. The present disclosure provides a new interactive form of song creation that helps users quickly become familiar with the song creation process and generate a created target song.

Description

Song creation method, song creation apparatus, storage medium, and electronic device
Technical Field
Embodiments of the present disclosure relate to the field of human-computer interaction technologies, and in particular, to a song creation method, a song creation apparatus, a computer-readable storage medium, and an electronic device.
Background
This section is intended to provide a background or context to the embodiments of the disclosure, and the description herein is not admitted to be prior art by inclusion in this section.
Most existing music apps only support full-link songwriting by mature creators; that is, steps in the creation process such as writing lyrics and composing can be carried out only by relying on the creator's own ability. Such schemes and products therefore place high demands on the user's creative ability, are not suitable for new creators, cannot mobilize their enthusiasm for creation, and have a narrow application range.
Disclosure of Invention
Therefore, a song creation method is needed that can create a new target song from an imitated song, so as to provide creative ideas for newcomers and help them learn how songs are created.
In this context, embodiments of the present disclosure desirably provide a song creation method, a song creation apparatus, a computer-readable storage medium, and an electronic device.
According to a first aspect of embodiments of the present disclosure, there is provided a song creation method, including: in response to a trigger operation on a song creation control, displaying a song recommendation interface, where the song recommendation interface includes M songs to be imitated, M being an integer greater than 1; in response to a song to be imitated being selected from the song recommendation interface, displaying a song strategy interface that includes a plurality of song strategies; in response to a target song strategy being selected from the song strategy interface, displaying a lyric strategy interface for selecting or entering a lyric strategy; in response to a target lyric strategy being determined from the lyric strategy interface, displaying a sound source strategy interface that includes a plurality of sound source strategies with different timbres; and in response to a target sound source strategy being determined from the sound source strategy interface, displaying a target song, where the target song is generated according to the song to be imitated, the target song strategy, the target lyric strategy, and the target sound source strategy.
In an exemplary embodiment of the present disclosure, the M songs to be imitated are determined according to the user's historical listening data, which includes a plurality of historically played songs and the play parameters corresponding to each; the play parameters include the play count and/or the play duration.
In an exemplary embodiment of the present disclosure, the method further includes: displaying, in the song recommendation interface, a plurality of singer names and the N songs to be imitated corresponding to each singer name; the singer names are obtained from the singer information corresponding to the M songs to be imitated; N is an integer greater than or equal to 1 and less than M.
In an exemplary embodiment of the disclosure, displaying the plurality of singer names and the N songs to be imitated corresponding to each singer name in the song recommendation interface includes: presenting the plurality of singer names in a first display order, where the first display order is determined according to the similarity between each singer and the user's preferred singer, the preferred singer being determined according to the user's profile features and historical listening data; and displaying the N songs to be imitated in a second display order, where the second display order is determined according to the similarity between the N songs to be imitated and the user's preferred songs for creation, those preferred songs being determined according to the user's profile features and historical creation features.
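The similarity-driven first display order described above can be sketched as follows. The cosine similarity over toy profile-feature vectors, the function name, and the data layout are all illustrative assumptions; the patent does not specify the similarity measure or the feature representation.

```python
import math

def order_singers_by_preference(singers: dict, preferred: dict) -> list:
    """Order singer names by similarity to the user's preferred singer,
    highest similarity first (the 'first display order' above).

    Both `singers[name]` and `preferred` map feature names to numeric
    values; cosine similarity over these toy vectors is an assumption.
    """
    def cosine(a, b):
        keys = set(a) | set(b)
        dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    return sorted(singers,
                  key=lambda name: cosine(singers[name], preferred),
                  reverse=True)
```

The same ranking could be applied to songs against the user's preferred songs for the second display order.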
In an exemplary embodiment of the disclosure, displaying the plurality of singer names and the N songs to be imitated corresponding to each singer name in the song recommendation interface includes: displaying the plurality of singer names in a third display order determined by the initial order of the singer names; and displaying the N songs to be imitated in a fourth display order determined by their ranking in a preset song list.
In an exemplary embodiment of the disclosure, after a song to be imitated is selected from the song recommendation interface and before the song strategy interface is displayed, the method further includes: displaying an audition control for the song to be imitated in the song recommendation interface; and in response to a trigger operation on the audition control, playing the song to be imitated or a specified section of it.
In an exemplary embodiment of the disclosure, the lyric strategy interface further includes a lyric strategy input control, and the method further includes: when the number of target lyric strategies selected from the lyric strategy interface reaches a preset number, setting the lyric strategy input control to a disabled state.
In an exemplary embodiment of the present disclosure, the lyric strategy includes a lyric keyword, and the method further includes: obtaining associated keywords of the lyric keyword, where the associated keywords are generated according to a semantic analysis of the lyric keyword; and displaying the associated keywords on the lyric strategy interface.
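As a rough illustration of deriving associated keywords from a semantic analysis of a lyric keyword, the sketch below ranks a small vocabulary by embedding similarity. The toy embedding table and the function name are assumptions; the patent does not specify the analysis model.

```python
import math

def associated_keywords(keyword: str, embeddings: dict, k: int = 3) -> list:
    """Return the k vocabulary words most similar to the lyric keyword,
    ranked by cosine similarity of toy embedding vectors.

    `embeddings` maps words to equal-length lists of floats; a real
    system would use a trained word-embedding or language model here.
    """
    if keyword not in embeddings:
        return []
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    query = embeddings[keyword]
    scored = [(cosine(query, vec), word)
              for word, vec in embeddings.items() if word != keyword]
    scored.sort(reverse=True)  # highest similarity first
    return [word for _, word in scored[:k]]
```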
In an exemplary embodiment of the present disclosure, presenting the target song includes: displaying a playing interface of the target song, the playing interface including a play control; and in response to a trigger operation on the play control, playing the target song and dynamically displaying its lyrics on the playing interface.
In an exemplary embodiment of the disclosure, dynamically displaying the lyrics of the target song on the playing interface includes: displaying the lyric keywords in the lyrics distinctively.
In an exemplary embodiment of the present disclosure, the playing interface further includes a download control configured to download the target song and/or its creation information to a specified storage location, the creation information including the accompaniment audio of the target song and a MIDI score file.
In an exemplary embodiment of the disclosure, the playing interface further includes a sharing control for sharing the target song to a preset application or a preset contact.
In an exemplary embodiment of the present disclosure, there are a plurality of target songs, and the method further includes: displaying a playing interface corresponding to each target song; and switching the target song in response to a switching operation on the playing interface.
In an exemplary embodiment of the present disclosure, the playing interface further includes a song replacement control, and the method further includes: displaying new target songs in response to a trigger operation on the song replacement control, the new target songs being regenerated according to the song to be imitated, the target song strategy, the target lyric strategy, and the target sound source strategy.
According to a second aspect of the present disclosure, there is provided a song creation method, including: performing feature extraction on the selected song to be imitated to obtain its song features, the song features including spectral features, vocal features, and rhythm features; generating composition information according to the song features of the song to be imitated and the selected target song strategy; generating a musical accompaniment based on the composition information; generating target lyrics according to the selected target lyric strategy and the lyrics of the song to be imitated; generating a main melody according to the musical accompaniment, the target lyrics, and the selected target sound source strategy; and mixing the main melody with the musical accompaniment to obtain the created target song.
In an exemplary embodiment of the disclosure, mixing the main melody and the musical accompaniment to obtain the target song includes: dividing the main melody into K main-melody segments at a preset time interval, and dividing the musical accompaniment into K accompaniment segments at the same interval, K being an integer greater than 1; mixing the main-melody segment and the accompaniment segment that fall in the same time interval to obtain K audio clips; and splicing the K audio clips according to their start and end times to obtain the target song.
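The segmentation-and-mixing step described above can be sketched as follows, assuming both tracks are plain lists of float samples in [-1, 1] on a common time grid. Clipped sample addition stands in for whatever mixing the server actually performs; the function and parameter names are assumptions.

```python
def mix_song(melody, accompaniment, sample_rate, segment_seconds):
    """Split the main melody and the accompaniment into K co-timed
    segments at a preset interval, mix each segment pair by clipped
    sample addition, then splice the K clips back together in order
    of their start times.
    """
    n = min(len(melody), len(accompaniment))
    seg_len = max(1, int(sample_rate * segment_seconds))
    clips = []
    for start in range(0, n, seg_len):            # K segments
        end = min(start + seg_len, n)
        mixed = [max(-1.0, min(1.0, melody[i] + accompaniment[i]))
                 for i in range(start, end)]       # mix co-timed samples
        clips.append(mixed)
    # Concatenate the K audio clips by start time to form the song.
    out = []
    for clip in clips:
        out.extend(clip)
    return out
```

In practice the mixing would operate on decoded PCM audio and might use level balancing rather than simple summation.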
According to a third aspect of embodiments of the present disclosure, there is provided a song creation apparatus, including: a song recommendation module configured to display a song recommendation interface in response to a trigger operation on the song creation control, the song recommendation interface including M songs to be imitated, M being an integer greater than 1; a song strategy display module configured to display a song strategy interface, including a plurality of song strategies, in response to a song to be imitated being selected from the song recommendation interface; a lyric strategy display module configured to display a lyric strategy interface, for selecting or entering a lyric strategy, in response to a target song strategy being selected from the song strategy interface; a sound source display module configured to display a sound source strategy interface, including a plurality of sound source strategies with different timbres, in response to a target lyric strategy being determined from the lyric strategy interface; and a song display module configured to display a target song in response to a target sound source strategy being determined from the sound source strategy interface, the target song being generated according to the song to be imitated, the target song strategy, the target lyric strategy, and the target sound source strategy.
According to a fourth aspect of embodiments of the present disclosure, there is provided a song creation apparatus, including: a feature extraction module configured to perform feature extraction on the selected song to be imitated to obtain its song features, including spectral, vocal, and rhythm features; a composition module configured to generate composition information according to the song features of the song to be imitated and the selected target song strategy; an accompaniment generation module configured to generate a musical accompaniment based on the composition information; a lyric generation module configured to generate target lyrics according to the selected target lyric strategy and the lyrics of the song to be imitated; a main melody generation module configured to generate a main melody according to the musical accompaniment, the target lyrics, and the selected target sound source strategy; and a mixing module configured to mix the main melody with the musical accompaniment to obtain the created target song.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any one of the song creation methods described above.
According to a sixth aspect of the disclosed embodiments, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any of the song authoring methods described above via execution of the executable instructions.
According to the song creation method, song creation apparatus, computer-readable storage medium, and electronic device of the embodiments of the present disclosure, the interactive song-creation flow helps creators complete song creation quickly and conveniently and lowers the demands on the user's own creative ability. Both newcomers and mature creators can obtain a finished target song through simple interactive operations and create further on its basis, avoiding the problem of being unable to produce a song because of insufficient creative ability; this raises newcomers' enthusiasm for creation and broadens the application range of the related music app. The process from imitation to creation enables a newcomer to learn how a song is created by imitating one, providing creative inspiration and stimulating creative potential.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 shows a flow diagram of a song creation method in an embodiment of the disclosure;
FIG. 2 illustrates a schematic diagram of an initial interface in an embodiment of the present disclosure;
FIG. 3 is a flowchart of displaying a plurality of singer names and the N songs to be imitated corresponding to each singer name in a song recommendation interface according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram illustrating a plurality of singer names based on a first presentation order and N songs to be imitated based on a second presentation order in an embodiment of the disclosure;
FIG. 5 is a schematic diagram illustrating an embodiment of the present disclosure in which an audition control is adjusted to be in a play state;
FIG. 6 is a flowchart of another way of displaying a plurality of singer names and the N songs to be imitated corresponding to each singer name in a song recommendation interface according to an embodiment of the present disclosure;
FIG. 7 illustrates a schematic diagram of a song recommendation interface after switching of the artist name and song to be emulated in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram illustrating an interface for screening songs to be emulated according to singer names in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating an interface for filtering songs to be emulated according to song genre in an embodiment of the present disclosure;
FIG. 10 illustrates a schematic diagram of a song policy interface in an embodiment of the disclosure;
FIG. 11 is a schematic diagram illustrating a lyrics policy interface in an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of an audio source strategy interface in an embodiment of the disclosure;
FIG. 13 is a schematic diagram illustrating adjustment of a play control to be in a play state in an embodiment of the present disclosure;
FIG. 14 illustrates a schematic view of a transition interface in an embodiment of the present disclosure;
FIG. 15 illustrates a flow diagram for generating a target song in an embodiment of the disclosure;
FIG. 16 is a schematic diagram illustrating a playback interface for a target song in an embodiment of the present disclosure;
FIG. 17 is a schematic diagram illustrating the display of an entitlement declaration prompt dialog on the playback interface in an embodiment of the present disclosure;
FIG. 18 illustrates a schematic diagram of a secondary confirmation interface in an embodiment of the disclosure;
FIG. 19 is a schematic diagram illustrating the display of the target song at a designated storage location in an embodiment of the present disclosure;
FIG. 20 illustrates a schematic interface diagram after sharing a target song to a file transfer assistant in an embodiment of the present disclosure;
FIG. 21 is a schematic diagram illustrating an interface after a preset contact clicks on a share link in an embodiment of the present disclosure;
FIG. 22 is a schematic diagram illustrating a play interface for another target song in an embodiment of the present disclosure;
FIG. 23 is a schematic diagram illustrating a playback interface for yet another target song in an embodiment of the present disclosure;
FIG. 24 illustrates a schematic view of another transition interface in an embodiment of the present disclosure;
FIG. 25 shows a schematic diagram of a song creation apparatus according to an embodiment of the present disclosure;
fig. 26 shows a schematic diagram of another song creation apparatus according to an embodiment of the present disclosure; and
fig. 27 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present disclosure will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are presented merely to enable those skilled in the art to better understand and to practice the disclosure, and are not intended to limit the scope of the disclosure in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the present disclosure, a song creation method, a song creation apparatus, a computer-readable storage medium, and an electronic device are provided.
In this document, any number of elements in the drawings is by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present disclosure are explained in detail below with reference to several representative embodiments of the present disclosure.
Summary of the Invention
The inventors found that most existing music apps only support full-link songwriting by mature creators, are not suitable for new creators, and cannot mobilize their enthusiasm for creation, so their audience and application range are narrow.
In view of the above, the basic idea of the present disclosure is as follows: the interactive song-creation flow provided by the disclosure helps creators complete song creation quickly and conveniently and lowers the demands on the user's own creative ability, so that both newcomers and mature creators can obtain a finished target song through simple interactive operations and create further on its basis. This avoids the problem of being unable to produce a song because of insufficient creative ability, raises newcomers' enthusiasm for creation, and broadens the application range of the related music app. The process from imitation to creation enables a newcomer to learn how a song is created by imitating one, providing creative inspiration and stimulating creative potential.
Having described the basic principles of the present disclosure, various non-limiting embodiments of the present disclosure are described in detail below.
Application scene overview
It should be noted that the following application scenarios are merely illustrated to facilitate understanding of the spirit and principles of the present disclosure, and embodiments of the present disclosure are not limited in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
Embodiments of the present disclosure create a target song based on a user-selected song to be imitated. For example, in a music app, after a trigger operation on the song creation control is detected, a song recommendation interface is displayed. After the user selects a song to be imitated there, a song strategy interface is displayed; once the user selects a target song strategy, a lyric strategy interface is displayed; once a target lyric strategy is selected, a sound source strategy interface is displayed. After the user selects a target sound source strategy, the terminal uploads the song to be imitated, the target song strategy, the target lyric strategy, and the target sound source strategy to the server, then receives the target song generated from this information and displays it for the user to play, download, and so on.
Exemplary method
An exemplary embodiment of the present disclosure first provides a song creation method. Fig. 1 shows a flowchart of the song creation method in an embodiment of the present disclosure, which may include the following steps S110 to S150:
it should be noted that, before step S110, a song creation entry may be displayed on the music APP, and the song creation entry may display related profile information or recommendation information of song creation, so that, after the user clicks the song creation entry, an initial interface including a song creation control may be displayed, and the initial interface includes a song creation control that triggers a song creation process. Referring to fig. 2, fig. 2 shows a schematic diagram of an initial interface in the embodiment of the present disclosure, and may also refer to a function corresponding to the song creation method as an "AI — song writing assistant", and show a name of the song creation method on the initial interface, where a circle in fig. 2 shows the song creation control, and the song creation control may be designed into different shapes according to actual requirements, for example: triangle, rhombus, irregular shape, etc. can be set by oneself according to actual conditions, and this disclosure does not make special restrictions to this.
In step S110, a song recommendation interface is displayed in response to a trigger operation on the song composition control.
In this step, after the user's trigger operation on the song creation control is detected, a song recommendation interface containing M songs to be imitated may be displayed; M is an integer greater than 1 that can be set according to the actual situation, which the present disclosure does not specially limit.
The trigger operation may be a single click, a double click, a long press, or the like, set according to the actual situation; the present disclosure does not specially limit this.
The M songs to be imitated may be determined by the server according to the historical song listening data of the user, and specifically, the server may obtain a plurality of historical playing songs of the user and playing parameters corresponding to each historical playing song, and screen out the M songs to be imitated from the plurality of historical playing songs according to the playing parameters of each historical song.
In an optional implementation manner, the multiple historical playing songs may be songs played in the past two months (which may be set according to the actual situation, and this is not specially limited by the present disclosure), and the playing parameter may be the play count, so that the server may sort the multiple historical playing songs in descending order of play count and select the top M historical playing songs from the sorted sequence as the M songs to be imitated.
In another optional implementation manner, the playing parameter may be a playing time length, so that the server may sort the plurality of history playing songs in a descending order of the playing time lengths of the history playing songs, and select top M history playing songs from the sorted sequence as the M songs to be imitated.
In yet another optional implementation manner, the playing parameters may be both the play count and the playing duration; for example, the play count and the playing duration of each historically played song may be weighted and summed to obtain a composite value, the multiple historically played songs are then sorted in descending order of the composite value, and the top M historically played songs are selected from the sorted sequence as the M songs to be imitated.
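The screening described in the three implementation manners above can be sketched as follows. This is a minimal illustration only: the field names, the weights, and the example history are assumptions, and the disclosure does not prescribe how the composite value is weighted.

```python
# Rank a user's historically played songs by a weighted blend of play
# count and play duration, then keep the top M as "songs to be imitated".
# Setting w_count=1, w_duration=0 reproduces the play-count-only variant;
# w_count=0, w_duration=1 reproduces the duration-only variant.

def top_m_songs(history, m, w_count=0.5, w_duration=0.5):
    """history: list of dicts with 'title', 'play_count', 'play_seconds'."""
    def score(song):
        return w_count * song["play_count"] + w_duration * song["play_seconds"]
    # Sort in descending order of the composite value and take the first M.
    return [s["title"] for s in sorted(history, key=score, reverse=True)[:m]]

history = [
    {"title": "ABCD", "play_count": 12, "play_seconds": 2400},
    {"title": "ABCE", "play_count": 30, "play_seconds": 900},
    {"title": "ADCD", "play_count": 5,  "play_seconds": 300},
]
print(top_m_songs(history, m=2))
```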
After the server screens out the M songs to be imitated, it may collect the singer names corresponding to the M songs to be imitated, and then group the N songs to be imitated corresponding to each singer name under that name (N is an integer greater than or equal to 1, and N is smaller than M).
Referring to fig. 3, fig. 3 is a flowchart illustrating a plurality of singer names and N songs to be emulated corresponding to each singer name in a song recommendation interface in an embodiment of the present disclosure, including steps S301 to S302:
in step S301, a plurality of singer names are presented based on a first presentation order.
In this step, the first display order may be determined by the server according to the similarity between the singers corresponding to the multiple singer names and the user's preferred singer, and the user's preferred singer is determined according to the portrait characteristics and historical song listening data of the user.
The portrait characteristics may include basic information of the user (e.g., age, gender, occupation, etc.), demographic characteristics (e.g., city of residence), and APP usage preferences (e.g., song listening time period, average daily listening duration, etc.); the historical song listening data is the plurality of historical playing songs together with the playing duration and/or play count of each historical playing song.
Furthermore, the server may determine the user's preferred singer according to the portrait characteristics and the historical song listening data. After the preferred singer is determined, similarity calculation may be performed between the singer corresponding to each singer name and the user's preferred singer, and the order of similarity from large to small is determined as the first display order. If there are multiple preferred singers, the average similarity between each singer and the multiple preferred singers may be calculated, and the order of average similarity from large to small determined as the first presentation order.
In step S302, based on the second presentation order, N songs to be imitated are presented.
In this step, the second display sequence is determined according to the similarity between the N songs to be imitated and the user preferred composition songs, and the user preferred composition songs are determined according to the portrait characteristics and the historical composition characteristics of the user.
The server may further determine the user-preferred composition songs according to the portrait characteristics and the historical creation characteristics. After the user-preferred composition songs are determined, the similarity between the N songs to be imitated corresponding to each singer name and the user-preferred composition songs may be calculated respectively, and the order of similarity from large to small determined as the second display order. If there are multiple user-preferred composition songs, the similarity mean between each song to be imitated and the multiple user-preferred composition songs may be calculated, and the order of the similarity mean from large to small determined as the second presentation order.
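Both the first and the second display orders reduce to the same pattern: sort candidates by descending mean similarity to the user's preferred set. A minimal sketch, in which the similarity function and the toy score table are assumptions for illustration (the disclosure does not specify how similarity is computed):

```python
def presentation_order(names, similarity, preferred):
    """Order names by descending mean similarity to the user's preferred set.
    similarity(a, b) -> float; preferred: the user's preferred items."""
    def mean_sim(name):
        return sum(similarity(name, p) for p in preferred) / len(preferred)
    return sorted(names, key=mean_sim, reverse=True)

# Toy similarity table between candidate singers and one preferred singer.
sim_table = {
    ("zhang san", "fav"): 0.9,
    ("li si", "fav"): 0.4,
    ("wang wu", "fav"): 0.7,
}
order = presentation_order(
    ["li si", "zhang san", "wang wu"],
    lambda a, b: sim_table[(a, b)],
    ["fav"],
)
print(order)
```

With several preferred items, `mean_sim` averages over all of them, which matches the "similarity mean" fallback described above.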
For example, referring to fig. 4, fig. 4 shows a schematic diagram of displaying multiple singer names based on the first display order and displaying N songs to be imitated based on the second display order in the embodiment of the present disclosure. An audition control may also be displayed after each song to be imitated, so that after a trigger operation on the audition control is detected, the song to be imitated or its climax segment may be played; this can be set according to the actual situation, and the present disclosure does not specially limit it.
Exemplarily, after detecting the user's trigger operation on the audition control, the display state of the audition control may be updated to adjust it to the playing state; referring to fig. 5, fig. 5 shows a schematic diagram of adjusting the audition control to the playing state in the embodiment of the present disclosure. If the user clicks the audition control again, playing may be terminated and the audition control restored to its initial state; a further click starts playing again.
Referring to fig. 6, fig. 6 shows another flowchart for showing a plurality of singer names and N songs to be imitated corresponding to each singer name in a song recommendation interface in the embodiment of the present disclosure, which includes steps S601-S602:
in step S601, a plurality of singer names are presented based on the third presentation order.
In this step, the third presentation order may be determined according to the alphabetical order of the initials of the plurality of singer names. For example, the server may count the initials of the singer names and display the singer names in order from A to Z.
In step S602, based on the fourth display order, N songs to be imitated are displayed.
In this step, the fourth display order may be determined according to the ranking of the N songs to be imitated in a preset song list. Specifically, the server may obtain a preset song list, for example: the weekly hot song list, the monthly hot song list, etc., which can be set according to the actual situation; this disclosure does not specially limit it. After the preset song list is obtained, the rankings of the N songs to be imitated in the preset song list may be respectively obtained, and the N songs to be imitated displayed in order of ranking from front to back.
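The fourth display order can be sketched as a lookup-and-sort against the preset song list. The chart contents here are placeholders taken from the document's example song names, not real chart data:

```python
def order_by_chart(songs, chart):
    """Order songs by their rank in a preset song list; songs absent from
    the chart sort last, keeping their input order (Python's sort is stable)."""
    rank = {title: i for i, title in enumerate(chart)}  # position 0 = rank 1
    return sorted(songs, key=lambda s: rank.get(s, len(chart)))

weekly_hot = ["ABZ", "ABC", "ABD"]   # assumed "weekly hot" song list
print(order_by_chart(["ABC", "ABD", "ABZ"], weekly_hot))
```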
It should be noted that, after the multiple singer names and the N songs to be imitated corresponding to each singer name are displayed, the user may slide the multiple singer names up and down to switch singer names, or slide the N songs to be imitated corresponding to each singer name up and down to switch songs to be imitated. Note that if the song ABCD is currently playing and the user slides to the song ADCD, playing of the song ABCD is terminated. Referring to fig. 7, fig. 7 shows a schematic diagram of the song recommendation interface after switching the singer name and the song to be imitated in the embodiment of the present disclosure; specifically, fig. 7 shows the interface after switching the singer name "zhang san" in fig. 4 to "wang wu" and switching the song to be imitated "ABCD" in fig. 4 to "ADCD".
It should be noted that, the user may also click the "singer" or "style" control on the song recommendation interface to quickly screen out the song to be imitated that the user wants to select according to the name of the singer or the style of the song.
Referring to fig. 8, fig. 8 shows an interface schematic diagram of screening songs to be imitated by singer name in the embodiment of the present disclosure, where "cao da" to "guan qi" on the left are singer names, and "ABC" to "ABZ" on the right are the N songs to be imitated under the singer name "li si". The user may switch singer names by sliding up and down, and select a song to be imitated from the N songs to be imitated corresponding to the selected singer name.
Referring to fig. 9, fig. 9 shows an interface schematic diagram of screening songs to be imitated by song style in the embodiment of the present disclosure, where "electronic" to "jazz" on the left are song styles, and "DD - cao da" to "GG - ma ba" on the right are different songs and their singer names. The user may switch song styles by sliding up and down, and select a song to be imitated from the songs under the selected song style.
After selecting the song to be emulated, the user may click the "next" control, and may proceed to step S120 to display a song policy interface in response to selecting the song to be emulated from the song recommendation interface.
In this step, in response to the user selecting a song to be emulated from the song recommendation interface, the user terminal may present a song policy interface, which may include a plurality of song policies.
The song policy refers to an imitation policy for the song; exemplary imitation policies may include two types: imitating the singer, or imitating the song style. These can be set according to the actual situation, and the present disclosure does not limit this.
Referring to fig. 10, fig. 10 shows a schematic diagram of a song policy interface in an embodiment of the present disclosure. The user may click "singer" or "style" on the song policy interface to imitate the song to be imitated under two different imitation policies so as to create the target song.
After the user selects a certain song policy from the song policy interface, the song policy may be determined as a target song policy, and then, step S130 may be entered to display a lyric policy interface in response to the selection of the target song policy from the song policy interface; the lyric strategy interface is used for selecting or inputting a lyric strategy.
In this step, in response to the user having selected a target song policy from the song policy interface, the user terminal may present a lyric policy interface. The lyric strategy interface is used for selecting or inputting lyric strategies, the lyric strategies can be lyric keywords, lyric sentences and the like, and can be set according to actual conditions, and the lyric strategies are not specially limited by the disclosure.
For example, referring to fig. 11, fig. 11 shows a schematic diagram of a lyric policy interface in an embodiment of the present disclosure, where a middle portion of the lyric policy interface may display tags of a plurality of preset lyric keywords, and when a user clicks the preset keywords, the preset keywords may be selected as a target lyric policy, and at this time, the tags of the preset keywords may be grayed out so that they cannot be selected repeatedly. And when the user presses the label of the preset lyric keyword for a long time or drags the label of the preset lyric keyword to the bottom of the lyric strategy interface, the user can cancel the selection of the lyric keyword.
The lyric strategy interface may further include a lyric strategy input control (i.e., a circled plus sign control in fig. 11), and after the user clicks the lyric strategy input control, the user may input a lyric keyword by himself, and after the lyric keyword is input, the lyric keyword may be added to the tags of the preset lyric keywords in the form of a tag.
Optionally, after the user inputs a certain lyric keyword by himself, the user terminal may send the lyric keyword to the server, and then the server may perform word meaning analysis on the lyric keyword, generate an associated keyword of the lyric keyword according to a word meaning analysis result thereof, and then return the associated keyword to the user terminal to be displayed in the lyric policy interface. The number of the associated keywords may be set according to actual conditions, and the number is not particularly limited in the present disclosure.
When the number of the target lyric strategies selected by the user meets a preset number threshold (for example, 4, which can be set according to the actual situation, but is not specially limited by the present disclosure), the lyric strategy input control may be set to a disabled state.
After the user determines the target lyric strategy from the lyric strategy interface, the user may click the "next" control in fig. 11, and may proceed to step S140 to display the audio source strategy interface in response to determining the target lyric strategy from the lyric strategy interface.
In this step, in response to the user determining a target lyric strategy from the lyric strategy interface, a sound source strategy interface may be displayed, where the sound source strategy interface includes multiple sound source strategies with different timbres; the sound source strategies may be multiple different virtual singers (each virtual singer corresponding to a different timbre).
For example, referring to fig. 12, fig. 12 shows a schematic diagram of a sound source policy interface in an embodiment of the present disclosure. Information (which may be an avatar or a name, set according to the actual situation) of 3 virtual singers may be displayed on the sound source policy interface at a time, for example, "queen, whit, and plumet" in fig. 12, and a play control may be displayed on the information of the virtual singer in the middle position. After the user clicks the play control, referring to fig. 13, which shows a schematic diagram of adjusting the play control to the playing state in the embodiment of the present disclosure, the play control is adjusted to the playing state and a sample of that virtual singer's timbre may be played, for example: the virtual singer speaking a certain sentence, or a segment of the virtual singer singing a certain song. This can be set according to the actual situation, and the present disclosure does not specially limit it.
Next, referring to fig. 12, it should be noted that the user may switch between different virtual singers by sliding left and right; when the avatar of a virtual singer is moved to the middle position, it may be displayed in an enlarged manner, and the playing control may be displayed on it.
Furthermore, after sliding left and right and auditioning the timbres of different virtual singers, the user may select the timbre of a certain virtual singer as the target sound source strategy. After the target sound source strategy is determined, step S150 may be entered to display the target song in response to determining the target sound source strategy from the sound source strategy interface, the target song being generated according to the song to be emulated, the target song strategy, the target lyric strategy, and the target sound source strategy.
In this step, in response to the user determining the target sound source policy from the sound source policy interface, the user terminal may send the song to be imitated, the target song policy, the target lyric policy, and the target sound source policy to the server. Furthermore, the server can generate the created target song according to the information of the song to be imitated, the target song strategy, the target lyric strategy, the target sound source strategy and the like.
It should be noted that, after the user terminal sends the song to be emulated, the target song policy, the target lyric policy, and the target sound source policy to the server, and before the server returns the target song, the user terminal may display a transition interface. Referring to fig. 14, fig. 14 shows a schematic diagram of a transition interface in an embodiment of the present disclosure; for example, a progress circle may be displayed in the middle of the transition interface, with the generation progress of the target song (for example: 58%) shown at its center, and information such as the target lyric strategy or target song strategy selected by the user displayed around the progress circle. This can be set according to the actual situation, and the present disclosure does not limit it.
The server side can input the song to be imitated, the target song strategy, the target lyric strategy, the target sound source strategy and the like into a pre-trained song creation model, so that the following processing procedures are executed through the song creation model to generate the target song:
referring to fig. 15, fig. 15 shows a flowchart of generating a target song in the embodiment of the present disclosure, including steps S1501 to S1506:
in step S1501, feature extraction is performed on the selected song to be imitated, so as to obtain the song feature of the song to be imitated.
In this step, the song creation model may perform feature extraction on the song to be imitated to obtain a frequency spectrum feature, a voice feature, and a rhythm feature of the song to be imitated.
In step S1502, composition information is generated according to the song characteristics of the song to be emulated and the selected target song policy.
In this step, the song creation model may imitate the song characteristics of the song to be imitated by using the selected target song policy to generate imitated composition information. Composition is the basis of arrangement; the composition is, in effect, the musical score of the song.
In step S1503, a musical accompaniment is generated based on the composition information.
In this step, the song creation model may generate a musical accompaniment based on the composition information, i.e., arrange the composition by adding drumbeats, harmonies, various electronic effects, and the like at appropriate places on the basis of the composition.
In step S1504, target lyrics are generated according to the selected target lyric strategy and the lyrics of the song to be emulated.
In this step, the song creation model may generate the target lyrics according to the selected target lyric policy and the lyrics of the song to be emulated. For example, synonym or near-synonym substitution and similar operations may be applied to the lyrics of the song to be imitated to generate the target lyrics.
In step S1505, a music melody is generated according to the music accompaniment, the target lyric and the selected target sound source strategy.
In this step, the song creation model may control the selected target sound source strategy to sing the target lyrics based on the musical accompaniment, so as to generate the music main melody.
In step S1506, a mixing process is performed on the main music melody and the accompaniment to obtain a target song.
In this step, the song creation model may perform sound mixing processing on the music main melody and the music accompaniment to obtain a created target song.
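The six-step flow S1501-S1506 can be sketched as a linear pipeline. All helper names below are hypothetical placeholders for model components (the disclosure does not name them); each stub just tags its input so the data flow is visible.

```python
def extract_features(song):            # S1501: spectrum/vocal/rhythm features
    return f"features({song})"

def compose(features, song_policy):    # S1502: imitated composition info
    return f"composition({features},{song_policy})"

def arrange(composition):              # S1503: accompaniment from composition
    return f"accompaniment({composition})"

def write_lyrics(lyric_policy, source_lyrics):   # S1504: target lyrics
    return f"lyrics({lyric_policy},{source_lyrics})"

def sing(accompaniment, lyrics, source_policy):  # S1505: main melody
    return f"melody({lyrics},{source_policy})"

def mix(melody, accompaniment):        # S1506: mix melody with accompaniment
    return f"song({melody}+{accompaniment})"

def create_target_song(song, source_lyrics, song_policy, lyric_policy, source_policy):
    feats = extract_features(song)
    comp = compose(feats, song_policy)
    acc = arrange(comp)
    lyr = write_lyrics(lyric_policy, source_lyrics)
    melody = sing(acc, lyr, source_policy)
    return mix(melody, acc)

result = create_target_song("ABCD", "la la", "imitate-singer", "love", "queen")
print(result)
```

Note that the accompaniment produced in S1503 feeds both S1505 (as the backing track the melody is sung over) and S1506 (as one of the two mixing inputs), matching the flowchart of fig. 15.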
Mixing is a step in music production that integrates sound from multiple sources into one stereo or mono audio track. These source signals may come from different instruments, voices, or ensembles, recorded in live performances or in a studio.
Specifically, the song creation model can divide the music main melody into K main melody segments (K is an integer greater than 1) according to a preset time interval, divide the music accompaniment into K accompaniment segments according to the preset time interval, further mix the main melody segments and the accompaniment segments in the same time interval to obtain K audio segments, and splice the K audio segments according to the starting time and the ending time of the K audio segments to obtain the target song.
For example, taking a music main melody and music accompaniment spanning 3 duration units and K = 3 as an example, the music main melody may be divided into 3 main melody segments (0-1, 1-2, 2-3), and the music accompaniment into 3 accompaniment segments (0-1, 1-2, 2-3). The main melody segment and the accompaniment segment in the 0-1 interval may be mixed first to obtain the audio segment for 0-1, then the segments in the 1-2 interval mixed to obtain the audio segment for 1-2, and then the segments in the 2-3 interval mixed to obtain the audio segment for 2-3. Finally, the 3 audio segments may be spliced according to their start and end times to obtain the target song.
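The segmented mixing above can be sketched with audio modeled as plain lists of samples. This is an assumption-laden toy: real mixing would operate on PCM buffers with gain control and clipping protection, whereas here mixing is simple sample-by-sample addition.

```python
def mix_in_segments(melody, accompaniment, k):
    """Split both tracks into k equal segments, mix each aligned pair,
    then splice the mixed segments back in order (assumes len % k == 0)."""
    n = len(melody) // k          # samples per segment
    segments = []
    for i in range(k):
        lo, hi = i * n, (i + 1) * n
        # Mix the pair of segments covering the same time interval.
        mixed = [m + a for m, a in zip(melody[lo:hi], accompaniment[lo:hi])]
        segments.append(mixed)
        # In the streamed variant, segments[0] could already be playing
        # here while the remaining segments are still being mixed.
    out = []
    for seg in segments:          # splice by start/end order
        out += seg
    return out

print(mix_in_segments([1, 2, 3, 4, 5, 6], [10, 20, 30, 40, 50, 60], k=3))
```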
Based on the above segment-by-segment mixing, the target song can begin playing as soon as the first audio segment (or first few segments) is obtained, so that subsequent mixing is processed in parallel while those segments play. This shortens the user's waiting time, lets the user hear the target song within a short time (generally 3 to 5 seconds), and improves the user experience.
After the server generates the target song, the target song may be returned to the user terminal; the target song may be a single song or a batch containing a plurality of songs. The user terminal may then display a playing interface of the target song. The playing interface may include a playing control; when the user triggers the playing control, the user terminal may play the target song and dynamically display its lyrics on the playing interface during playback. In addition, when the lyrics are dynamically displayed, the lyric strategy (i.e., the lyric keywords) in the lyrics may be displayed distinctively, for example: enlarged, bolded, underlined, highlighted, or shown in a different color. This can be set according to the actual situation, and the present disclosure does not specially limit it.
Specifically, when the target song is a song, reference may be made to fig. 16, and fig. 16 shows a schematic diagram of a playing interface of the target song in the embodiment of the present disclosure, a cover of the target song may be displayed in the playing interface, the cover includes a playing control, and information such as a song name and lyrics may be displayed below the cover.
As can be understood from the above explanation of step S1506, the present disclosure may start playing the target song (i.e., display the playing interface of the target song) after the first one or first few audio segments of the target song are obtained. In this case, referring to fig. 16, a prompt control reading "generating file…" may be displayed on the playing interface, so that the user may play the target song but cannot yet download it.
Then, referring to the related explanation of step S1506, after the K audio segments are spliced to obtain the target song, the prompt control may be updated to a "download production file" control, so that by triggering this control the user may download the target song and/or its creation information (including the accompaniment audio of the target song and the MIDI score file) to a specified storage location. After the user clicks the "download production file" control, a rights declaration prompt box may pop up to explain the copyright, permitted uses, and other related information of the target song to the user. Referring to fig. 17, fig. 17 shows a schematic diagram of displaying the rights declaration prompt box on the playing interface in the embodiment of the present disclosure; the prompt box may include a cancel control (i.e., the "×" in fig. 17) and a "confirm and download" control. After the user clicks "confirm and download", the interface jumps to a secondary confirmation interface. Referring to fig. 18, fig. 18 shows a schematic diagram of the secondary confirmation interface in an embodiment of the present disclosure; the secondary confirmation interface may display the size of the target song and a download control. After the user clicks the download control, the user may select a specified storage location for the target song, which is then stored there. Referring to fig. 19, fig. 19 shows a schematic diagram of the target song displayed in the specified storage location in the embodiment of the present disclosure, specifically an interface schematic diagram of the target song stored in the iCloud cloud drive when the specified storage location is the iCloud cloud drive.
The playing interface may further include a sharing control (i.e., "share song" in fig. 16), which may be configured to share the generated target song with a preset application program (e.g., WeChat, Moments, Weibo, a music APP, etc.) or with a preset contact. Illustratively, referring to fig. 20, fig. 20 shows an interface schematic diagram after sharing the target song to a file transfer assistant in an embodiment of the present disclosure. The user may also share the target song with a preset contact, who can then click the related sharing link. After the preset contact clicks the sharing link, referring to fig. 21, which shows the resulting interface in the embodiment of the present disclosure, the author of the target song (i.e., zhao xi in the figure), the song cover, the lyrics, and the like may be displayed, and a "let me try" control may also be displayed to guide the preset contact to try the related song creation process.
Referring to fig. 16, the playing interface may further include a song changing control (i.e., a "rewriting" control in fig. 16), and after the user triggers the "rewriting" control, the server may regenerate a plurality of target songs according to the previously selected to-be-imitated song, the target song policy, the target lyric policy, and the target sound source policy, and then return the target songs to the user terminal for display.
Optionally, after the user triggers the "rewrite" control, the user may directly jump to the initial interface shown in fig. 2 again to start a new song creation process.
When the target song includes a plurality of songs, referring to fig. 22, fig. 22 shows a schematic diagram of a playing interface of another target song in the embodiment of the present disclosure. Covers of X (e.g., 3) of the plurality of songs may be displayed in the playing interface; the cover of the song in the middle position may include a playing control, and the song name, lyrics, and other information of the song in the middle position may be displayed below the covers. When the user slides left or right, other songs can be switched into the middle position, and when the song in the middle position changes, the song name, lyrics, and other information below the covers change accordingly.
It should be noted that, when the user slides to the last target song, referring to fig. 23, which shows a schematic diagram of a playing interface of another target song in the embodiment of the present disclosure, a prompt message of "slide left to imitate 5 more songs" may be displayed on the playing interface (the number of imitated copies may be set according to the actual situation, which is not specially limited by the present disclosure). Further, after the user's left-slide operation is detected, another transition interface may be displayed for showing the creation progress of the 5 songs on the basis of fig. 23; referring to fig. 24, which shows a schematic diagram of another transition interface in the embodiment of the present disclosure, "imitating… 56%" in fig. 24 is the creation progress of the 5 songs.
Similarly, after the 5 imitated songs are generated, the user may audition each target song by sliding left and right, and then download the selected target song to the specified storage location through the "download production file" control on the playing interface, or share it to a preset application program or preset contact through the "share song" control on the playing interface.
The interactive song creation flow provided by the present disclosure can help creators complete song creation quickly and conveniently, reducing the demands on the user's own creative ability. Whether a novice creator or an experienced one, the user only needs simple interactive operations to obtain well-crafted target songs easily and quickly, and can create further on the basis of these target songs. This avoids the problem of being unable to create songs due to insufficient creative ability, improves the creative enthusiasm of new creators, and broadens the application scope of the associated music APP. The imitation-to-creation process provided by the present disclosure enables a newcomer to learn the song creation process by imitating songs, provides creative inspiration, and stimulates creative potential.
Exemplary apparatus
Having described the song creating method according to the exemplary embodiment of the present disclosure, next, a song creating apparatus according to the exemplary embodiment of the present disclosure will be described with reference to fig. 25 and 26.
Fig. 25 shows a schematic diagram of a song creating apparatus 2500 according to an embodiment of the present disclosure, including:
a song recommendation module 2510, configured to respond to a trigger operation on the song composition control, and display a song recommendation interface; the song recommendation interface comprises M songs to be imitated; m is an integer greater than 1;
a song policy presentation module 2520 for presenting a song policy interface in response to a song to be emulated being selected from the song recommendation interface; the song policy interface comprises a plurality of song policies;
a lyric strategy display module 2530, configured to display a lyric strategy interface in response to selecting a target song strategy from the song strategy interface; the lyric strategy interface is used for selecting or inputting a lyric strategy;
an audio source presentation module 2540 for presenting an audio source policy interface in response to determining a target lyric policy from the lyric policy interface; the sound source strategy interface comprises a plurality of sound source strategies containing different timbres;
a song display module 2550, configured to display a target song in response to determining a target audio source policy from the audio source policy interface, where the target song is generated according to the song to be emulated, the target song policy, the target lyric policy, and the target audio source policy.
In an alternative embodiment, the M songs to be imitated are determined according to historical song listening data of the user; the historical song listening data of the user comprises a plurality of historical playing songs and playing parameters corresponding to each historical playing song; the playing parameters comprise playing times and/or playing duration.
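The selection of the M songs to be imitated from historical listening data could be sketched as follows. The linear score and its weights (`w_count`, `w_duration`) are illustrative assumptions; the disclosure only names play count and play duration as the available playing parameters.

```python
# Hypothetical selection of the M songs to be imitated from a user's
# historical listening data. The weighted score is an assumption; the
# disclosure states only that play counts and/or play durations are used.

def top_m_songs(history, m, w_count=1.0, w_duration=0.01):
    """history: iterable of (song_id, play_count, total_play_seconds).
    Returns the m song ids with the highest weighted play score."""
    scored = sorted(
        history,
        key=lambda s: w_count * s[1] + w_duration * s[2],
        reverse=True,
    )
    return [song_id for song_id, _, _ in scored[:m]]
```

With such weights, a song played once but for a long time can outrank a song played often but briefly, which matches the text's "play times and/or play duration" framing.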
In an alternative embodiment, the song recommendation module 2510 is configured to:
displaying a plurality of singer names and N songs to be imitated corresponding to each singer name in the song recommending interface; the singer names are obtained by statistics according to singer information corresponding to the M songs to be imitated; wherein N is an integer greater than or equal to 1, and N is less than M.
In an alternative embodiment, the song recommendation module 2510 is configured to:
presenting the plurality of singer names based on a first presentation order, where the first presentation order is determined according to the similarity between the singers corresponding to the singer names and the user's preferred singers, the user's preferred singers being determined according to the user's portrait features and historical singing data; and displaying the N songs to be imitated based on a second presentation order, where the second presentation order is determined according to the similarity between the N songs to be imitated and the user's preferred composition songs, the user's preferred composition songs being determined according to the user's portrait features and historical composition features.
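The similarity-based first presentation order could be sketched as below. Representing singers as feature vectors and using cosine similarity are illustrative assumptions; the disclosure does not specify how singer similarity is computed.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def order_singers(singer_features, preferred_vector):
    """Rank singer names by similarity to the user's preferred-singer
    vector (derived, per the text, from portrait features and history)."""
    return sorted(
        singer_features,
        key=lambda name: cosine(singer_features[name], preferred_vector),
        reverse=True,
    )
```

The second presentation order (songs ranked against the user's preferred composition songs) would follow the same pattern with song-level feature vectors.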
In an alternative embodiment, the song recommendation module 2510 is configured to:
displaying the plurality of singer names based on a third presentation order, where the third presentation order is determined according to the initial-letter order of the singer names; and displaying the N songs to be imitated based on a fourth presentation order, where the fourth presentation order is determined according to the ranking of the N songs to be imitated in a preset song list.
In an alternative embodiment, after a song to be imitated is selected from the song recommendation interface and before the song policy interface is presented, the song recommendation module 2510 is configured to:
display an audition (preview) control for the song to be imitated in the song recommendation interface; and, in response to receiving a trigger operation on the audition control, play the song to be imitated or a specified section of the song to be imitated.
In an optional embodiment, the lyric strategy interface further comprises a lyric strategy input control; a lyric policy presentation module 2530 configured to:
and when the number of target lyric strategies selected from the lyric strategy interface meets a preset number condition, set the lyric strategy input control to a disabled state.
In an alternative embodiment, the lyric policy includes a lyric keyword; a lyric policy presentation module 2530 configured to:
acquire associated keywords of the lyric keywords, where the associated keywords are generated according to a word-sense analysis of the lyric keywords; and display the associated keywords on the lyric strategy interface.
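Generating associated keywords from a lyric keyword could be sketched as a nearest-neighbour lookup over word vectors. Both the toy embedding table and the similarity approach below are illustrative assumptions standing in for whatever word-sense analysis the disclosure actually uses.

```python
import math

# Toy word vectors, purely illustrative; a real system would use a
# trained embedding model or lexical resource for word-sense analysis.
EMBEDDINGS = {
    "ocean": (0.9, 0.1),
    "sea": (0.85, 0.2),
    "wave": (0.7, 0.3),
    "fire": (0.1, 0.9),
}

def associated_keywords(keyword, top_k=2):
    """Return the top_k words most similar in sense to `keyword`."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    target = EMBEDDINGS[keyword]
    others = [w for w in EMBEDDINGS if w != keyword]
    return sorted(others, key=lambda w: cos(EMBEDDINGS[w], target),
                  reverse=True)[:top_k]
```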
In an alternative embodiment, song presentation module 2550 is configured to:
displaying a playing interface of the target song, wherein the playing interface comprises a playing control; and responding to the received trigger operation of the playing control, playing the target song, and dynamically displaying the lyric of the target song on the playing interface.
In an alternative embodiment, song presentation module 2550 is configured to:
and display the lyric keywords in the lyrics in a visually distinctive manner.
In an optional implementation manner, the playing interface further includes a download control, where the download control is used to download the target song and/or the creation information of the target song to a specified storage location; wherein the authoring information includes accompaniment audio of the target song and a MIDI score file.
In an optional implementation manner, the playing interface further includes a sharing control, and the sharing control is used for sharing the target song to a preset application program or to a preset contact.
In an alternative embodiment, the target song is a plurality of songs; a song presentation module 2550 configured to:
displaying a playing interface corresponding to each target song; and switching the target song in response to receiving the switching operation of the playing interface.
In an optional embodiment, the playing interface further includes a song replacing control, and the song presentation module 2550 is configured to:
displaying a new target song in response to receiving the triggering operation of the song replacing control; the new target songs are a plurality of songs regenerated according to the songs to be imitated, the target song strategies, the target lyric strategies and the target sound source strategies.
Fig. 26 is a schematic diagram of another song creation apparatus 2600 in an embodiment of the present disclosure, including:
the feature extraction module 2610 is used for extracting features of the selected song to be imitated to obtain song features of the song to be imitated; the song characteristics comprise frequency spectrum characteristics, voice characteristics and rhythm characteristics;
the composition module 2620 is used for generating composition information according to the song characteristics of the song to be imitated and the selected target song strategy;
an accompaniment generating module 2630 for generating a musical accompaniment based on the composition information;
a lyric generating module 2640, configured to generate target lyrics according to the selected target lyric policy and the lyrics of the song to be emulated;
a main melody generating module 2650, configured to generate a music main melody according to the music accompaniment, the target lyric, and the selected target sound source strategy;
the audio mixing processing module 2660 is configured to perform audio mixing processing on the music main melody and the music accompaniment to obtain a created target song.
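The data flow among the six modules of apparatus 2600 can be sketched as a pipeline. The stage callables here are placeholders supplied by the caller (an assumption for illustration); only the ordering and the inputs/outputs of each stage follow the text.

```python
def create_song(song, song_strategy, lyric_strategy, source_strategy, stages):
    """Chain the six modules of Fig. 26. `song` is a dict with "audio"
    and "lyrics" entries; `stages` supplies one callable per module."""
    extract, compose, accompany, write_lyrics, sing, remix = stages
    features = extract(song["audio"])                      # module 2610
    composition = compose(features, song_strategy)         # module 2620
    accompaniment = accompany(composition)                 # module 2630
    lyrics = write_lyrics(lyric_strategy, song["lyrics"])  # module 2640
    melody = sing(accompaniment, lyrics, source_strategy)  # module 2650
    return remix(melody, accompaniment)                    # module 2660
```

Note that the lyric branch (module 2640) depends only on the lyric strategy and the original lyrics, so it could run in parallel with the composition and accompaniment stages.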
In an alternative embodiment, the remix processing module 2660 is configured to:
dividing the music main melody into K main melody segments according to a preset time interval, and dividing the music accompaniment into K accompaniment segments according to the same preset time interval, where K is an integer greater than 1; mixing the main melody segment and the accompaniment segment in the same time interval to obtain K audio segments; and splicing the K audio segments according to their start and end times to obtain the target song.
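The segment-wise mixing just described can be sketched as follows. Representing each track as a mono list of float samples and mixing by per-sample averaging are assumptions for illustration; the disclosure specifies the segmentation and splicing but not the mixing operation itself.

```python
def mix_segments(melody, accompaniment, sample_rate, interval_s):
    """Split both tracks into K segments of the preset time interval,
    mix the melody and accompaniment segments that share an interval,
    then splice the mixed segments back together in start-time order."""
    seg_len = int(sample_rate * interval_s)
    k = min(len(melody), len(accompaniment)) // seg_len  # K segments
    mixed = []
    for i in range(k):
        start, end = i * seg_len, (i + 1) * seg_len
        # per-sample average of the two segments in the same interval
        mixed.extend((m + a) / 2.0
                     for m, a in zip(melody[start:end],
                                     accompaniment[start:end]))
    return mixed
```

Because segments are appended in index order, the splice by start/end time reduces to simple concatenation here; a real mixer would also handle resampling, channel counts, and gain.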
In addition, other specific details of the embodiments of the present disclosure have been described in detail in the above method embodiments and are not repeated here.
Exemplary storage medium
The storage medium of the exemplary embodiment of the present disclosure is explained below.
In the present exemplary embodiment, the above-described method may be implemented by a program product that includes program code; the program product may be stored on a portable compact disc read-only memory (CD-ROM), for example, and executed on a device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user computing device, partly on a remote computing device, or entirely on the remote computing device or server. In situations involving remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary electronic device
An electronic device of an exemplary embodiment of the present disclosure is explained with reference to fig. 27.
The electronic device 2700 shown in fig. 27 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 27, an electronic device 2700 is in the form of a general purpose computing device. Components of electronic device 2700 may include, but are not limited to: at least one processing unit 2710, at least one memory unit 2720, a bus 2730 that connects the various system components (including the memory unit 2720 and the processing unit 2710), and a display unit 2740.
Where the memory unit stores program code, the program code may be executed by the processing unit 2710, causing the processing unit 2710 to perform the steps according to various exemplary embodiments of the present disclosure described in the above section "exemplary methods" of this specification. For example, processing unit 2710 may perform method steps or the like as shown in fig. 1.
The memory unit 2720 can include volatile memory units such as a random access memory unit (RAM) 2721 and/or a cache memory unit 2722, and can further include a read only memory unit (ROM) 2723.
The memory unit 2720 may also include a program/utility 2724 having a set (at least one) of program modules 2725, such program modules 2725 including but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The bus 2730 may include a data bus, an address bus, and a control bus.
Electronic device 2700 may also communicate with one or more external devices 2800 (e.g., keyboard, pointing device, bluetooth device, etc.) through input/output (I/O) interface 2750. The electronic device 2700 further includes a display unit 2740 connected to an input/output (I/O) interface 2750 for displaying. Also, electronic device 2700 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through network adapter 2760. As shown, network adapter 2760 communicates with the other modules of electronic device 2700 over bus 2730. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in connection with electronic device 2700, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
It should be noted that although in the above detailed description several modules or sub-modules of the apparatus are mentioned, such division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module, in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the present disclosure is not limited to the embodiments disclosed, nor does the division into aspects imply that features in those aspects cannot be combined to advantage; such division is merely for convenience of presentation. The disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A song authoring method, comprising:
responding to the triggering operation of the song composition control, and displaying a song recommendation interface; the song recommendation interface comprises M songs to be imitated; m is an integer greater than 1;
responding to a song to be simulated selected from the song recommending interface, and displaying a song strategy interface; the song policy interface comprises a plurality of song policies;
in response to a target song strategy selected from the song strategy interfaces, displaying a lyric strategy interface; the lyric strategy interface is used for selecting or inputting a lyric strategy;
in response to determining a target lyric policy from the lyric policy interface, presenting an audio source policy interface; the sound source strategy interface comprises a plurality of sound source strategies containing different timbres;
and in response to the target sound source strategy determined from the sound source strategy interface, displaying a target song, wherein the target song is generated according to the song to be imitated, the target song strategy, the target lyric strategy and the target sound source strategy.
2. The method according to claim 1, wherein the M songs to be imitated are determined according to historical song listening data of a user;
the historical song listening data of the user comprises a plurality of historical playing songs and playing parameters corresponding to each historical playing song;
the playing parameters comprise playing times and/or playing duration.
3. The method of claim 1, further comprising:
displaying a plurality of singer names and N songs to be imitated corresponding to each singer name in the song recommending interface; the singer names are obtained by statistics according to singer information corresponding to the M songs to be imitated;
wherein N is an integer greater than or equal to 1, and N is less than M.
4. The method of claim 3, wherein the presenting the plurality of singer names and the N songs to be imitated corresponding to each singer name in the song recommendation interface comprises:
presenting the plurality of singer names based on a first presentation order; the first display sequence is determined according to the similarity between singers corresponding to the singers and the singers preferred by the user, and the singers preferred by the user are determined according to the portrait characteristics and historical singing data of the user;
displaying the N songs to be imitated based on a second display sequence; the second display sequence is determined according to the similarity between the N songs to be imitated and user preferred composition songs, and the user preferred composition songs are determined according to the portrait characteristics and historical composition characteristics of the user.
5. The method of claim 3, wherein the presenting the plurality of singer names and the N songs to be imitated corresponding to each singer name in the song recommendation interface comprises:
displaying the plurality of singer names based on a third display order; the third presentation order is determined according to the first order of the singer names;
displaying the N songs to be imitated based on a fourth display sequence; the fourth display order is determined according to the ranking of the N songs to be imitated in a preset song list.
6. A song creation method, comprising:
extracting characteristics of the selected song to be imitated to obtain the song characteristics of the song to be imitated; the song characteristics comprise frequency spectrum characteristics, voice characteristics and rhythm characteristics;
generating composition information according to the song characteristics of the song to be imitated and the selected target song strategy;
generating a musical accompaniment based on the composition information;
generating target lyrics according to the selected target lyric strategy and the lyrics of the song to be simulated;
generating a music main melody according to the music accompaniment, the target lyric and the selected target sound source strategy;
and mixing the music main melody and the music accompaniment to obtain the created target song.
7. A song authoring apparatus, comprising:
the song recommending module is used for responding to the triggering operation of the song creating control and displaying a song recommending interface; the song recommendation interface comprises M songs to be imitated; m is an integer greater than 1;
the song strategy display module is used for responding to a song to be simulated selected from the song recommending interface and displaying a song strategy interface; the song policy interface comprises a plurality of song policies;
the lyric strategy display module is used for responding to a target song strategy selected from the song strategy interfaces and displaying a lyric strategy interface; the lyric strategy interface is used for selecting or inputting a lyric strategy;
an audio source presentation module for presenting an audio source policy interface in response to determining a target lyric policy from the lyric policy interface; the sound source strategy interface comprises a plurality of sound source strategies containing different timbres;
and the song display module is used for responding to the target sound source strategy determined from the sound source strategy interface and displaying a target song, and the target song is generated according to the song to be imitated, the target song strategy, the target lyric strategy and the target sound source strategy.
8. A song authoring apparatus, comprising:
the characteristic extraction module is used for extracting the characteristics of the selected song to be imitated to obtain the song characteristics of the song to be imitated; the song characteristics comprise frequency spectrum characteristics, voice characteristics and rhythm characteristics;
the composition module is used for generating composition information according to the song characteristics of the song to be imitated and the selected target song strategy;
the accompaniment generating module is used for generating music accompaniment on the basis of the composition information;
the lyric generating module is used for generating target lyrics according to the selected target lyric strategy and the lyrics of the song to be simulated;
the main melody generating module is used for generating a music main melody according to the music accompaniment, the target lyric and the selected target sound source strategy;
and the sound mixing processing module is used for carrying out sound mixing processing on the music main melody and the music accompaniment to obtain the created target song.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 6.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1-6 via execution of the executable instructions.
CN202210967108.2A 2022-08-11 2022-08-11 Song creation method, song creation apparatus, storage medium, and electronic device Pending CN115346503A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210967108.2A CN115346503A (en) 2022-08-11 2022-08-11 Song creation method, song creation apparatus, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
CN115346503A true CN115346503A (en) 2022-11-15

Family

ID=83952214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210967108.2A Pending CN115346503A (en) 2022-08-11 2022-08-11 Song creation method, song creation apparatus, storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN115346503A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200372896A1 (en) * 2018-07-05 2020-11-26 Tencent Technology (Shenzhen) Company Limited Audio synthesizing method, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination