CN114974184A - Audio production method and device, terminal equipment and readable storage medium - Google Patents


Info

Publication number
CN114974184A
CN114974184A (application CN202210563372.XA)
Authority
CN
China
Prior art keywords
audio
user
singing
song
target song
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210563372.XA
Other languages
Chinese (zh)
Inventor
方晓胤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
Original Assignee
Migu Cultural Technology Co Ltd
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Migu Cultural Technology Co Ltd, China Mobile Communications Group Co Ltd, MIGU Music Co Ltd filed Critical Migu Cultural Technology Co Ltd
Priority to CN202210563372.XA
Publication of CN114974184A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00: Details of electrophonic musical instruments
    • G10H1/0008: Associated control or indicating means
    • G10H1/0091: Means for obtaining special acoustic effects
    • G10H2210/00: Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031: Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal


Abstract

The invention discloses an audio production method and apparatus, a terminal device, and a readable storage medium. The method comprises: acquiring singing audio of a first user for a target song; acquiring a sound characteristic parameter of the first user, wherein the sound characteristic parameter comprises at least one of a sound-line parameter, a timbre parameter, and a vocal-range parameter, and is obtained based on audio data recorded while the first user recites or sings target content of the target song; and synthesizing the singing audio according to the sound characteristic parameter to obtain the first user's singing audio for the target song. Because the synthesis is driven by the first user's own sound characteristic parameters, the resulting singing audio for the target song has better singing quality, improving the singing quality of the target song while improving the user's singing experience.

Description

Audio production method and device, terminal equipment and readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an audio production method and apparatus, a terminal device, and a readable storage medium.
Background
With the development and progress of society, entertainment has become increasingly diverse, and listening to and singing songs are among the most popular pastimes. Existing music software lets a user record music, play music, and fine-tune audio to obtain the desired effect, and can also provide a platform for singing songs. When singing, a user typically picks a song at random; with this approach, the user may choose a song that does not suit his or her voice, or the final recording may be of poor singing quality, resulting in a poor user experience.
The above is provided only to aid understanding of the technical solution of the present invention and is not an admission that it constitutes related art.
Disclosure of Invention
The embodiments of the present invention provide an audio production method and apparatus, a terminal device, and a readable storage medium, aiming to solve the technical problem that, with existing approaches to singing songs, a user may pick a song that does not suit his or her voice or the final recording may be of poor singing quality, resulting in a poor user experience.
The embodiment of the invention provides an audio production method, which comprises the following steps:
acquiring singing audio of a first user aiming at a target song;
acquiring a sound characteristic parameter of the first user, wherein the sound characteristic parameter comprises at least one of a sound-line parameter, a timbre parameter, and a vocal-range parameter, and is obtained based on audio data recorded while the first user recites or sings the target content of the target song;
and synthesizing the singing audio according to the sound characteristic parameters to obtain the singing audio of the first user aiming at the target song.
Optionally, the step of acquiring the sound characteristic parameter of the first user includes:
acquiring keywords respectively corresponding to each song paragraph of the target song, and outputting the keywords, wherein the target content of the target song comprises the keywords;
acquiring audio data corresponding to the first user based on the keywords;
and identifying the audio data to acquire the sound characteristic parameters of the first user.
Optionally, after the step of synthesizing the singing audio according to the sound characteristic parameter to obtain the singing audio of the first user for the target song, the method further includes:
determining chorus paragraphs matched with the sound characteristic parameters in the target song according to the matching degree of the sound characteristic parameters and the audio characteristic parameters corresponding to the song paragraphs of the target song;
generating a first sub audio corresponding to the chorus paragraph according to the singing audio;
generating second sub-audio corresponding to other song paragraphs except the chorus paragraph according to the singing audio associated with a second user;
and generating a chorus song audio according to the first sub audio and the second sub audio.
Optionally, the step of generating the first sub-audio corresponding to the chorus paragraph according to the singing audio includes:
acquiring, from the singing audio, the audio data corresponding to the vocals of the chorus paragraph, so as to generate the first sub-audio corresponding to the chorus paragraph; or,
deleting, from the singing audio, the audio data corresponding to the song paragraphs other than the chorus paragraph, so as to generate the first sub-audio corresponding to the chorus paragraph.
Optionally, the audio production method further comprises:
determining the second user who sings the target song with the first user;
and when the second user has sung the target song, executing the step of generating the second sub-audio corresponding to the song paragraphs other than the chorus paragraph according to the singing audio associated with the second user.
Optionally, after the step of determining the second user who sings the target song with the first user, the method further includes:
when the second user has not sung the target song, acquiring a sound characteristic parameter of the second user;
generating singing audio of the second user for the target song according to the sound characteristic parameter of the second user and the singing audio corresponding to the target song; and
associating the second user with the generated singing audio.
Optionally, the singing audio associated with the second user is the audio recorded when the second user sings the target song.
In addition, to achieve the above object, the present invention also provides an audio producing apparatus, including:
the first acquisition module is used for acquiring the singing audio of a first user aiming at a target song;
a second obtaining module, configured to obtain a sound characteristic parameter of the first user, where the sound characteristic parameter includes at least one of a sound-line parameter, a timbre parameter, and a vocal-range parameter, and is obtained based on audio data recorded while the first user recites or sings target content of the target song;
and the audio synthesis module is used for synthesizing the singing audio according to the sound characteristic parameters to obtain the singing audio of the first user aiming at the target song.
In addition, to achieve the above object, the present invention also provides a terminal device, including: a memory, a processor, and an audio production program stored on the memory and executable on the processor, wherein the audio production program, when executed by the processor, implements the steps of the audio production method described above.
In addition, to achieve the above object, the present invention further provides a readable storage medium storing an audio producing program, which when executed by a processor, implements the steps of the audio producing method described above.
With the audio production method and apparatus, terminal device, and readable storage medium provided by the embodiments of the present invention, the singing audio is synthesized according to the first user's sound characteristic parameters to obtain the first user's singing audio for the target song. The synthesis adapts to the first user's sound characteristic parameters, adjusting the audio as if the target song were sung in the first user's own voice, so that the resulting singing audio matches the characteristics of the first user's voice. The final singing audio therefore has better singing quality, achieving the purpose of improving the singing quality of the target song while improving the user's singing experience.
Drawings
Fig. 1 is a schematic structural diagram of a terminal device according to various embodiments of an audio production method of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of the audio production method according to the present invention;
FIG. 3 is a schematic flow chart illustrating a process of obtaining sound characteristic parameters according to a first embodiment of the audio production method of the present invention;
FIG. 4 is a flowchart illustrating a second embodiment of the audio production method according to the present invention;
FIG. 5 is a flowchart illustrating a second embodiment of the audio production method according to the present invention;
fig. 6 is a schematic diagram of module components of the audio producing apparatus according to the present invention.
Detailed Description
In order to better understand the above technical solution, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
In the following description, suffixes such as "module", "component", or "unit" are used to denote elements only to facilitate the description of the present invention and have no special meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The present invention provides an audio production method, comprising the following steps:
acquiring singing audio of a first user aiming at a target song;
acquiring a sound characteristic parameter of the first user, wherein the sound characteristic parameter comprises at least one of a sound-line parameter, a timbre parameter, and a vocal-range parameter, and is obtained based on audio data recorded while the first user recites or sings the target content of the target song;
and synthesizing the singing audio according to the sound characteristic parameters to obtain the singing audio of the first user aiming at the target song.
The audio production method provided by the present invention synthesizes the singing audio according to the first user's sound characteristic parameters to obtain the first user's singing audio for the target song. It adapts to the first user's sound characteristic parameters and adjusts the audio as if the target song were sung in the first user's own voice, so that the resulting singing audio matches the characteristics of the first user's voice. The final singing audio therefore has better singing quality, improving the singing quality of the target song while improving the user's singing experience.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal device according to various embodiments of the audio production method of the present invention. The terminal device involved in the audio production method of the present invention may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), or the like.
As shown in fig. 1, the terminal device may include: a memory 101 and a processor 102. Those skilled in the art will appreciate that the structure shown in fig. 1 does not limit the terminal, which may include more or fewer components than those shown, combine certain components, or arrange components differently. The memory 101 stores an operating system and an audio production program. The processor 102 is the control center of the terminal device; it executes the audio production program stored in the memory 101 to implement the steps of the embodiments of the audio production method of the present invention.
Optionally, the terminal device may further include a display unit 103. The display unit 103 includes a display panel, which may be configured as a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like, and is used to display an interface for the user to browse.
Optionally, the terminal device may further include a communication unit that establishes data communication (which may be IP communication or a Bluetooth channel) with other terminal devices, such as a computer, through a network protocol, so as to implement data transmission with them.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than that shown or described herein.
Based on the structural block diagram of the terminal device, the embodiments of the audio production method of the invention are provided. In a first embodiment, the present invention provides an audio production method, please refer to fig. 2, and fig. 2 is a flowchart illustrating the audio production method according to the first embodiment of the present invention. In this embodiment, the audio production method comprises the steps of:
step S10, acquiring the singing audio of the first user aiming at the target song;
the singing audio of the first user for the target song refers to the audio corresponding to the target song sung by the first user.
The singing audio of the first user for the target song may be acquired by pre-recording the audio of the first user singing the target song, storing it, and then reading it from the target storage area, or it may be captured in real time while the first user sings the target song; this embodiment does not limit the manner of acquisition.
Step S20, obtaining the sound characteristic parameter of the first user,
wherein the sound characteristic parameter comprises at least one of a sound-line parameter, a timbre parameter, and a vocal-range parameter, and is obtained based on audio data recorded while the first user recites or sings the target content of the target song.
the sound characteristic parameters include a sound ray parameter, a tone color parameter, a pitch parameter, and a register parameter. The range parameter refers to the parameter range from the lowest pitch to the highest pitch that can be achieved by human voice or musical instruments. In this embodiment, the sound characteristic parameter is obtained based on the corresponding audio data of the first user when reciting or singing the target content of the target song.
To obtain the first user's sound characteristic parameters, the audio data recorded while the first user recites or sings the target content of the target song can be acquired, and the acquired audio data can then be analyzed and identified by audio analysis software to obtain the first user's sound characteristic parameters.
Optionally, the audio data may be obtained directly while the first user recites or sings the target content of the target song, or the audio data may be recorded first and obtained indirectly by reading the recording; this embodiment does not limit the manner of acquisition.
Optionally, the target content of the target song includes at least one of: all or part of the lyrics of the target song, keywords in those lyrics, and all or part of the melody of the target song.
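The embodiment leaves the parameter-extraction step to generic "audio analysis software". As an illustrative sketch only (the function name and data layout are assumptions, not part of the disclosure), a vocal-range parameter could be derived from frame-wise pitch estimates of the recited or sung target content:

```python
import numpy as np

def vocal_range_params(f0_hz):
    """Derive a simple vocal-range parameter from frame-wise pitch
    estimates (Hz), where 0.0 or NaN marks an unvoiced frame.
    Returns (lowest_hz, highest_hz, median_hz) over voiced frames.
    """
    f0 = np.asarray(f0_hz, dtype=float)
    voiced = f0[np.isfinite(f0) & (f0 > 0)]
    if voiced.size == 0:
        raise ValueError("no voiced frames in the recording")
    return float(voiced.min()), float(voiced.max()), float(np.median(voiced))

# Pitch tracking itself (e.g. an autocorrelation or pYIN tracker) is
# assumed to have produced the contour below.
lo, hi, mid = vocal_range_params([0.0, 220.0, 233.1, 0.0, 246.9, 196.0])
```

The sound-line and timbre parameters would come from analogous spectral statistics; the patent does not specify how they are computed.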
As an alternative implementation manner, please refer to fig. 3, where fig. 3 is a schematic flowchart illustrating a process of acquiring sound characteristic parameters according to a first embodiment of the audio production method of the present invention, and step S20 includes:
step S21, obtaining keywords respectively corresponding to each song paragraph of the target song, and outputting the keywords, wherein the target content of the target song comprises the keywords;
step S22, acquiring audio data corresponding to the first user based on the keyword;
step S23, the audio data is identified to obtain the sound characteristic parameters of the first user.
Note that tags may be assigned to the target song in advance. Tags include, but are not limited to, song language, song emotion, and song style: the language may be Mandarin, Cantonese, English, French, and so on; the emotion may be happiness, anger, sadness, excitement, and so on; and the style may be pop, ballad, rock, rap, and so on.
In addition, for each target song, paragraph tags may be preset for each song paragraph according to the lyrics, melody, and emotion conveyed by the vocals of that paragraph; paragraph tags include, but are not limited to, emotion tags, style tags, timbre parameters, and vocal-range parameters.
Optionally, each song paragraph in the target song may be a lyric paragraph corresponding to the target song, where the lyric paragraph may be one sentence of lyrics, or may be at least two sentences of lyrics in succession.
Optionally, the target song may be obtained from a song library associated with music-playing software, from locally downloaded and stored songs, or from a chorus platform provided by the music software, by receiving the song selected through the first user's song-selection instruction; this embodiment does not limit the manner of acquisition.
Optionally, the target song may be obtained by determining the first user's psychological state and then pushing songs to the first user accordingly. For example, with the first user's authorization, dynamic information from the first user's social software is obtained and analyzed to infer the psychological state. Such dynamic information includes Moments posts, microblog posts, chat messages from a specific time period, chat messages from the past 24 to 48 hours, and the like. Analyzing this information can indicate the first user's life and mood state, such as a promotion at work, marriage, falling in love, or a breakup, so that the pushed songs better fit the first user's current state.
It should be noted that the keywords may be used to obtain the sound characteristic parameters of the user himself.
Keywords corresponding to each song paragraph of the target song are obtained and output; the first user recites or sings according to the output keywords; the first user's audio data corresponding to the keywords is then acquired and identified to obtain the first user's sound characteristic parameters.
Optionally, the keywords corresponding to each song paragraph of the target song may be determined by obtaining the lyrics sung at the highest and/or lowest pitch in each paragraph and taking those lyrics as the keywords, from which the user's sound characteristic parameters are then obtained.
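As a hedged illustration of this optional keyword rule (the data layout is an assumption; the patent only says the lyrics at the highest and/or lowest pitch become keywords), a minimal sketch might look like:

```python
def paragraph_keywords(paragraphs):
    """For each song paragraph, take the lyric sung at the highest pitch
    and the lyric sung at the lowest pitch as that paragraph's keywords.

    paragraphs: list of paragraphs, each a list of (lyric, pitch_hz) pairs.
    Returns one keyword list per paragraph.
    """
    keywords = []
    for para in paragraphs:
        hi_word = max(para, key=lambda wp: wp[1])[0]  # lyric at highest pitch
        lo_word = min(para, key=lambda wp: wp[1])[0]  # lyric at lowest pitch
        keywords.append([hi_word] if hi_word == lo_word else [hi_word, lo_word])
    return keywords

# Hypothetical lyric/pitch alignment for two paragraphs:
demo = [[("fly", 440.0), ("me", 392.0), ("moon", 523.3)],
        [("play", 330.0), ("stars", 294.0)]]
result = paragraph_keywords(demo)
```

The keywords would then be output for the first user to recite or sing, as described above.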
Step S30, synthesizing the singing audio according to the sound characteristic parameters, and obtaining the singing audio of the first user for the target song.
To synthesize the singing audio according to the sound characteristic parameters and obtain the first user's singing audio for the target song, the first user's sound characteristic parameters, such as the sound-line, timbre, and vocal-range parameters, may be applied through audio editing software to adjust the corresponding parameters of the singing audio's audio data, so that the synthesis yields the first user's singing audio for the target song.
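The embodiment leaves the actual adjustment to audio editing software. One minimal, assumption-laden sketch (the function name and the geometric-centre heuristic are invented here, not taken from the patent) is to transpose a template pitch contour by a constant ratio so it sits inside the user's measured vocal range:

```python
import numpy as np

def fit_to_vocal_range(template_f0, user_lo, user_hi):
    """Transpose a template melody contour (per-frame pitch in Hz,
    voiced frames only) by a constant ratio so its median pitch lands
    at the geometric centre of the user's range [user_lo, user_hi]."""
    f0 = np.asarray(template_f0, dtype=float)
    target_center = np.sqrt(user_lo * user_hi)  # centre of the range on a log scale
    shift = target_center / np.median(f0)       # constant ratio = fixed transposition
    return f0 * shift

# User's measured range is 100-400 Hz; template sits around 300 Hz.
shifted = fit_to_vocal_range([200.0, 300.0, 400.0], 100.0, 400.0)
```

A real implementation would resynthesize the waveform (e.g. with a phase vocoder) rather than only shifting a pitch contour; this sketch shows the parameter-fitting step only.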
Compared with directly using the first user's recorded performance, synthesizing the singing audio according to the first user's sound characteristic parameters adapts the audio to those parameters, adjusting it as if the target song were sung in the first user's own voice. The resulting singing audio therefore matches the characteristics of the first user's voice and has better singing quality, achieving the purpose of improving the singing quality of the target song while improving the user's singing experience.
Optionally, in practice a user's mood varies, and the user's sound characteristic parameters vary accordingly. When the sound characteristic parameters are obtained in real time, by directly analyzing and identifying with audio analysis software the audio data recorded while the first user recites or sings the target content of the target song, the parameters reflect the first user's current psychological state, such as mood and/or emotion. Synthesizing with these parameters then yields singing audio that not only matches the characteristics of the first user's voice but also reflects the first user's current psychological state, giving the singing audio the user's personal character.
In the technical solution disclosed in this embodiment, the singing audio is synthesized according to the first user's sound characteristic parameters to obtain the first user's singing audio for the target song, so that the synthesis adapts to the first user's sound characteristic parameters.
Based on the first embodiment, a second embodiment of the audio production method of the present invention is provided, please refer to fig. 4, and fig. 4 is a flowchart illustrating the second embodiment of the audio production method of the present invention. In this embodiment, after step S30, the method further includes:
step S40, determining chorus paragraphs matched with the sound characteristic parameters in the target song according to the matching degree of the sound characteristic parameters and the audio characteristic parameters corresponding to each song paragraph of the target song;
the audio characteristic parameter corresponding to each song paragraph of the target song refers to the determined audio characteristic parameter corresponding to each song paragraph based on the standard audio associated with the target song. Optionally, the standard audio associated with the target song may refer to an audio corresponding to the target song issued by an original singer corresponding to the target song; or may be an audio corresponding to the target song that is sung by the singer turning over the target song, which is not limited in this embodiment.
Corresponding to the sound characteristic parameters, the audio characteristic parameters of each song paragraph include, but are not limited to, at least one of a sound-line parameter, a timbre parameter, and a vocal-range parameter. The audio characteristic parameters of each song paragraph of the target song are compared with the sound characteristic parameters along at least one of the dimensions of sound line, timbre, and vocal range; the matching degree is determined from the comparison result; and the chorus paragraphs matching the sound characteristic parameters, i.e. the paragraphs that suit the first user's voice characteristics, are then determined from the target song according to the matching degree, so that the audio obtained when the first user sings those paragraphs has high singing quality.
The matching degree may be determined from the comparison result as follows: compare the audio characteristic parameters of each song paragraph with the sound characteristic parameters along the dimensions of sound line, timbre, and vocal range, count the number of matched dimensions, and take that count as the matching degree. For example, if the sound-line parameter in the audio characteristic parameters is the same as or similar to the sound-line parameter in the sound characteristic parameters, the count is incremented by 1; if the numerical range of the vocal-range parameter in the sound characteristic parameters contains the numerical range of the vocal-range parameter in the audio characteristic parameters, the count is incremented by 1. If both the sound-line and vocal-range dimensions match, the count is 2, and the matching degree determined from the count is 2.
When determining the chorus paragraphs matching the sound characteristic parameters according to the matching degree, a song paragraph of the target song is taken as a chorus paragraph matching the sound characteristic parameters when its matching degree is greater than or equal to a preset matching degree.
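The dimension-counting rule above can be sketched as follows; the dictionary layout, similarity tolerance, and threshold value are illustrative assumptions, not values given in the patent:

```python
def matching_degree(user_params, para_params, tol=0.1):
    """Count matched dimensions between the user's sound characteristic
    parameters and one song paragraph's audio characteristic parameters.

    Scalar dimensions match when they differ by at most `tol` relative
    error; the vocal-range dimension matches when the user's (lo, hi)
    range contains the paragraph's (lo, hi) range, as described above.
    """
    degree = 0
    for dim, para_val in para_params.items():
        user_val = user_params.get(dim)
        if user_val is None:
            continue  # dimension not measured for this user
        if dim == "range":
            (u_lo, u_hi), (p_lo, p_hi) = user_val, para_val
            if u_lo <= p_lo and p_hi <= u_hi:
                degree += 1
        elif abs(user_val - para_val) <= tol * abs(para_val):
            degree += 1
    return degree

def chorus_paragraphs(user_params, paragraphs, min_degree=2):
    """Indices of paragraphs whose matching degree reaches the preset
    threshold; these become the first user's chorus paragraphs."""
    return [i for i, p in enumerate(paragraphs)
            if matching_degree(user_params, p) >= min_degree]

user = {"sound_line": 1.0, "timbre": 0.5, "range": (100.0, 400.0)}
paras = [{"sound_line": 1.05, "range": (150.0, 300.0)},
         {"sound_line": 2.0, "range": (50.0, 500.0)}]
matched = chorus_paragraphs(user, paras)
```

Here the first paragraph matches on both the sound-line and range dimensions (degree 2) and is selected, while the second matches on neither.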
Step S50, generating a first sub-audio corresponding to the chorus paragraph according to the singing audio;
step S60, generating a second sub-audio corresponding to other song paragraphs except the chorus paragraph according to the singing audio associated with a second user;
and step S70, generating chorus song audio according to the first sub audio and the second sub audio.
The singing audio associated with the second user refers to the audio of all or part of the song paragraphs of the target song as sung by the second user, where the second user is a user who sings the target song together with the first user.
Optionally, the number of second users may be one, or at least two.
Optionally, after step S30, the method includes: associating the first user with the singing audio, or associating the first user, the target song, and the singing audio.
As an optional implementation, the first sub-audio corresponding to the chorus paragraphs is generated from the singing audio as follows: after the singing audio associated with the first user is obtained, the audio data corresponding to the vocals of the song paragraphs other than the chorus paragraphs is deleted from that singing audio through audio editing software, so as to generate the first sub-audio corresponding to the chorus paragraphs. Correspondingly, the second sub-audio corresponding to the song paragraphs other than the chorus paragraphs is generated from the singing audio associated with the second user: after that singing audio is obtained, the audio data corresponding to the vocals of the song paragraphs other than the chorus paragraphs is obtained from it through audio editing software, so as to generate the second sub-audio. The chorus song audio is then generated from the first sub-audio and the second sub-audio, thereby obtaining the chorus song.
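A minimal sketch of steps S50-S70, under the assumption that each singing audio is stored as a mapping from song-paragraph name to its audio samples; the actual embodiment edits timestamped waveforms with audio editing software, so this only illustrates the delete-and-splice logic.

```python
def extract_paragraphs(singing_audio: dict, wanted: set) -> dict:
    """Keep only the wanted paragraphs' audio data, i.e. delete the
    audio data of every other song paragraph (steps S50 and S60)."""
    return {name: data for name, data in singing_audio.items() if name in wanted}


def merge_chorus(first_sub: dict, second_sub: dict, order: list) -> list:
    """Concatenate the first and second sub-audio in song-paragraph
    order to generate the chorus song audio (step S70)."""
    chorus_audio = []
    for name in order:
        chorus_audio.extend(first_sub.get(name, second_sub.get(name, [])))
    return chorus_audio
```

With the first user covering only the chorus paragraph and the second user covering the remaining paragraphs, the merged result interleaves the two voices paragraph by paragraph.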
Optionally, the singing audio associated with the second user is audio recorded while the second user sings the target song.
Compared with a method that requires the users of a chorus to complete the chorus online at the same time, in this embodiment the audio recorded and synthesized when the first user sings the target song can be used in advance as the singing audio associated with the first user, and/or the audio recorded when the second user sings the target song can be used in advance as the singing audio associated with the second user. The first user and the second user therefore do not need to be online simultaneously to complete the chorus, which enables a more flexible chorus mode free of time and space limitations while still achieving the purpose of the first user and the second user completing the chorus song.
In the technical solution disclosed in this embodiment, on the basis of obtaining the sound characteristic parameters of the first user, the chorus paragraphs in the target song that match the sound characteristic parameters are determined according to the matching degree between the audio characteristic parameters corresponding to each song paragraph of the target song and the sound characteristic parameters. This identifies chorus paragraphs that suit the voice characteristics of the first user, so that the first user can sing them easily, which improves the first user's experience. The first sub-audio corresponding to the chorus paragraphs is generated from the first user's singing audio, so the audio data obtained when the first user sings the chorus paragraphs of the target song has a higher singing quality. The second sub-audio corresponding to the song paragraphs other than the chorus paragraphs is generated from the singing audio associated with the second user, and the chorus song audio is then generated from the first sub-audio and the second sub-audio. On the basis of successfully obtaining the chorus song audio sung by the first user and the second user, the chorus quality of that audio is thereby improved.
Referring to fig. 5, fig. 5 is a flowchart illustrating a third embodiment of an audio production method according to the present invention. In this embodiment, the audio production method further comprises the steps of:
step S80, determining the second user who sings the target song with the first user;
step S90, when the second user has performed the target song, execute step S60.
The second user who sings the target song together with the first user may be determined by outputting, on a chorus platform provided by music software, a selection interface containing candidate users: when the selection interface receives a selection instruction, the candidate user selected by the instruction is taken as the second user who sings the target song together with the first user. The second user may also be determined by inputting the name corresponding to the second user, which is not limited in this embodiment.
It can be understood that, in practical applications, the second user may be a user who has never released a song, or a user who has never sung the target song.
Whether the second user has sung the target song may be determined by searching, through music playing software, the songs sung by the second user. If a song matching the target song exists among the searched songs, the second user has sung the target song; if no song matching the target song exists among the searched songs, the second user has not sung the target song.
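The search described above can be sketched as a lookup over the songs retrieved for the second user. Matching by normalized title is an assumption made for illustration; a production system would more likely match on a song identifier.

```python
def normalize(title: str) -> str:
    """Normalize a song title for comparison (case and whitespace only)."""
    return " ".join(title.lower().split())


def has_sung(searched_songs: list, target_song: str) -> bool:
    """Return True when a song matching the target song exists among
    the songs the second user has sung."""
    target = normalize(target_song)
    return any(normalize(song) == target for song in searched_songs)
```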
When the second user has sung the target song, step S60 may be executed; that is, the second sub-audio corresponding to the song paragraphs other than the chorus paragraphs is generated according to the singing audio associated with the second user. In this embodiment, the singing audio associated with the second user is the acquired song audio of the target song that the second user has sung.
As an alternative implementation, after step S80, the method further includes:
when the second user does not sing the target song, acquiring a sound characteristic parameter of the second user;
generating singing audio of the second user for the target song according to the sound characteristic parameters of the second user and the audio data corresponding to the target song;
associating the second user with the singing audio.
In practical applications, the second user may be a user who has never released a song, or a user who has never sung the target song. When the second user has not sung the target song, in order to achieve the purpose of the first user and the second user chorusing the target song and finally obtain the chorus song audio, the singing audio of the second user for the target song is generated according to the sound characteristic parameters of the second user and the audio data corresponding to the target song. The singing audio of the second user for the target song is thus obtained, and the second user is associated with the singing audio; step S60 is then executed, that is, the second sub-audio corresponding to the song paragraphs other than the chorus paragraphs is generated according to the singing audio associated with the second user. In this way, even if the second user has never sung the target song, the first user and the second user can still chorus the target song, which increases the interest of chorusing and provides a new audio production mode for users who like singing or chorusing.
It should be noted that, for a specific implementation of the step of acquiring the sound characteristic parameters of the second user, reference may be made to the specific implementation of the step of acquiring the sound characteristic parameters of the first user in the first embodiment; for a specific implementation of the step of generating the singing audio for the target song according to the sound characteristic parameters of the second user and the audio data corresponding to the target song, reference may be made to the corresponding step for the first user in the first embodiment. Details are not repeated in this embodiment.
In the technical solution disclosed in this embodiment, the second user who sings the target song with the first user is determined. When the second user has sung the target song, step S60 can be executed directly to obtain the second sub-audio corresponding to the song paragraphs other than the chorus paragraphs sung by the second user. When the second user has not sung the target song, the singing audio of the second user for the target song can be generated according to the sound characteristic parameters of the second user and the audio data corresponding to the target song, and the second user is associated with that singing audio. Thus, even if the second user has never sung the target song, the first user and the second user can still successfully chorus the target song.
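The branch in this embodiment, put together, can be sketched as below. `recordings`, `voice_params_of` and `synthesize` are hypothetical stand-ins: `synthesize` abstracts the first-embodiment step of generating singing audio from sound characteristic parameters plus the song's audio data, which in practice would be a singing-synthesis or voice-conversion model.

```python
def singing_audio_for(user: str, target_song: str, recordings: dict,
                      voice_params_of, synthesize):
    """Return the singing audio associated with the user: the existing
    recording when the user has already sung the target song, otherwise
    audio newly generated from the user's sound characteristic
    parameters and then associated with the user."""
    key = (user, target_song)
    if key in recordings:                    # the second user has sung the song
        return recordings[key]
    params = voice_params_of(user)           # acquire sound characteristic parameters
    audio = synthesize(params, target_song)  # generate singing audio for the song
    recordings[key] = audio                  # associate the user with the audio
    return audio
```

Either way the caller receives singing audio for the second user, so step S60 can proceed regardless of whether a recording existed.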
As shown in fig. 6, fig. 6 is a schematic diagram of module components of an audio producing apparatus provided by the present invention, and the audio producing apparatus 100 includes:
a first obtaining module 110, configured to obtain a singing audio of a first user for a target song;
a second obtaining module 120, configured to obtain a sound characteristic parameter of the first user, where the sound characteristic parameter includes at least one of a sound ray parameter, a timbre parameter, a pitch parameter, and a vocal range parameter, and the sound characteristic parameter is obtained based on audio data corresponding to the first user when reciting or singing target content of the target song;
and an audio synthesizing module 130, configured to synthesize the singing audio according to the sound characteristic parameter, so as to obtain the singing audio of the first user for the target song.
The specific implementation of the audio production apparatus of the present invention is substantially the same as the embodiments of the audio production method described above, and will not be described herein again.
The invention also proposes a terminal device, comprising a memory, a processor, and an audio production program stored in the memory and executable on the processor, wherein the audio production program, when executed by the processor of the terminal device, implements the steps of the audio production method in any of the above embodiments.
The invention also proposes a readable storage medium having stored thereon an audio production program which, when executed by a processor, implements the steps of the audio production method according to any of the above embodiments.
In the embodiments of the terminal device and the readable storage medium provided by the present invention, all technical features of the embodiments of the audio production method are included, and the contents of the expansion and the explanation of the specification are basically the same as those of the embodiments of the audio production method, and are not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method of audio production, the method comprising:
acquiring singing audio of a first user for a target song;
acquiring sound characteristic parameters of the first user, wherein the sound characteristic parameters comprise at least one of a sound line parameter, a tone parameter and a vocal range parameter, and the sound characteristic parameters are acquired based on audio data corresponding to the first user when reciting or singing target content of the target song;
and synthesizing the singing audio according to the sound characteristic parameters to obtain the singing audio of the first user for the target song.
2. The method of claim 1, wherein the step of obtaining the sound characteristic parameters of the first user comprises:
acquiring keywords respectively corresponding to each song paragraph of the target song, and outputting the keywords, wherein the target content of the target song comprises the keywords;
acquiring audio data corresponding to the first user based on the keywords;
and identifying the audio data to acquire the sound characteristic parameters of the first user.
3. The method of claim 1, wherein after the step of synthesizing the singing audio according to the sound characteristic parameters to obtain the singing audio of the first user for the target song, the method further comprises:
determining chorus paragraphs matched with the sound characteristic parameters in the target song according to the matching degree of the sound characteristic parameters and the audio characteristic parameters corresponding to the song paragraphs of the target song;
generating a first sub audio corresponding to the chorus paragraph according to the singing audio;
generating second sub-audio corresponding to other song paragraphs except the chorus paragraph according to the singing audio associated with a second user;
and generating a chorus song audio according to the first sub audio and the second sub audio.
4. The method of claim 3, wherein the step of generating the first sub-audio corresponding to the chorus passage based on the singing audio comprises:
acquiring audio data corresponding to the vocals of the chorus paragraph in the singing audio to generate a first sub audio corresponding to the chorus paragraph; or,
and deleting the audio data corresponding to other song paragraphs except the chorus paragraph from the singing audio so as to generate a first sub audio corresponding to the chorus paragraph.
5. The method of claim 3, wherein the audio production method further comprises:
determining the second user who sings the target song with the first user;
and when the second user sings the target song, executing the step of generating second sub-audio corresponding to other song paragraphs except the chorus paragraphs according to the singing audio associated with the second user.
6. The method of claim 5, wherein the step of determining the second user who sings the target song with the first user is followed by further comprising:
when the second user does not sing the target song, acquiring a sound characteristic parameter of the second user;
generating the singing audio of the second user aiming at the target song according to the sound characteristic parameters of the second user and the singing audio corresponding to the target song;
associating the second user with the singing audio.
7. The method of claim 3, wherein the second user-associated singing audio is audio recorded by the second user singing the target song.
8. An audio producing apparatus, characterized in that the audio producing apparatus comprises:
the first acquisition module is used for acquiring the singing audio of a first user aiming at a target song;
a second obtaining module, configured to obtain a sound characteristic parameter of the first user, where the sound characteristic parameter includes at least one of a sound ray parameter, a timbre parameter, a pitch parameter, and a vocal range parameter, and the sound characteristic parameter is obtained based on audio data corresponding to the first user when reciting or singing target content of the target song;
and the audio synthesis module is used for synthesizing the singing audio according to the sound characteristic parameters to obtain the singing audio of the first user aiming at the target song.
9. A terminal device, characterized in that the terminal device comprises: memory, processor and an audio production program stored on the memory and executable on the processor, the audio production program, when executed by the processor, implementing the steps of the audio production method according to any one of claims 1 to 7.
10. A readable storage medium, characterized in that the readable storage medium stores an audio production program which, when executed by a processor, realizes the steps of the audio production method according to any one of claims 1 to 7.
CN202210563372.XA 2022-05-20 2022-05-20 Audio production method and device, terminal equipment and readable storage medium Pending CN114974184A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210563372.XA CN114974184A (en) 2022-05-20 2022-05-20 Audio production method and device, terminal equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN114974184A true CN114974184A (en) 2022-08-30

Family

ID=82986037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210563372.XA Pending CN114974184A (en) 2022-05-20 2022-05-20 Audio production method and device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114974184A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110741430A (en) * 2017-06-14 2020-01-31 雅马哈株式会社 Singing synthesis method and singing synthesis system
CN110741430B (en) * 2017-06-14 2023-11-14 雅马哈株式会社 Singing synthesis method and singing synthesis system

Similar Documents

Publication Publication Date Title
CN108369799B (en) Machines, systems, and processes for automatic music synthesis and generation with linguistic and/or graphical icon-based music experience descriptors
CN108806656B (en) Automatic generation of songs
US10229669B2 (en) Apparatus, process, and program for combining speech and audio data
CN108806655B (en) Automatic generation of songs
TW202006534A (en) Method and device for audio synthesis, storage medium and calculating device
CN106708894B (en) Method and device for configuring background music for electronic book
US20070245375A1 (en) Method, apparatus and computer program product for providing content dependent media content mixing
JP2015517684A (en) Content customization
US10325581B2 (en) Singing voice edit assistant method and singing voice edit assistant device
US20140258858A1 (en) Content customization
US9075760B2 (en) Narration settings distribution for content customization
EP2442299B1 (en) Information processing apparatus, information processing method, and program
CN111782576A (en) Background music generation method and device, readable medium and electronic equipment
CN114974184A (en) Audio production method and device, terminal equipment and readable storage medium
CN111666445A (en) Scene lyric display method and device and sound box equipment
CN113178182A (en) Information processing method, information processing device, electronic equipment and storage medium
CN113611268A (en) Musical composition generation and synthesis method and device, equipment, medium and product thereof
JP2002049627A (en) Automatic search system for content
CN113486643B (en) Lyric synthesizing method, terminal device and readable storage medium
WO2024075422A1 (en) Musical composition creation method and program
Merz Composing with all sound using the freesound and wordnik APIs
Tian et al. Homepage and Search Personalization at Spotify
JP6026835B2 (en) Karaoke equipment
KR20240021753A (en) System and method for automatically generating musical pieces having an audibly correct form
CN115700870A (en) Audio data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination