CN112130727A - Chorus file generation method, apparatus, device and computer readable storage medium

Publication number: CN112130727A
Authority: CN (China)
Prior art keywords: chorus, objects, user, target, file
Legal status: Granted
Application number: CN202011053091.7A
Other languages: Chinese (zh)
Other versions: CN112130727B
Inventors: 朱一闻, 俞俊伟, 俞静, 姜冬其
Current Assignee: Hangzhou Netease Cloud Music Technology Co Ltd
Original Assignee: Hangzhou Netease Cloud Music Technology Co Ltd
Application filed by Hangzhou Netease Cloud Music Technology Co Ltd
Priority to CN202011053091.7A
Publication of CN112130727A
Application granted; publication of CN112130727B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/10: File systems; file servers
    • G06F 16/11: File system administration, e.g. details of archiving or snapshots
    • G06F 16/113: Details of archiving
    • G06F 16/16: File or folder operations, e.g. details of user interfaces specifically adapted to file systems
    • G06F 16/168: Details of user interfaces specifically adapted to file systems, e.g. browsing and visualisation, 2d or 3d GUIs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

The embodiments of the present application provide a chorus file generation method, a chorus file generation apparatus, an electronic device, and a computer-readable storage medium, relating to the field of computer technology. The method comprises: when a first interactive operation triggering the start of the audio recording function is detected, displaying a plurality of interactive controls each representing a different recording mode, the plurality of interactive controls including a chorus control; if a second interactive operation acting on the chorus control is detected, determining the chorus objects to be selected corresponding to the second interactive operation and displaying them; and if a third interactive operation selecting a target chorus object from the chorus objects to be selected is detected, recording the user audio and synthesizing the audio file corresponding to the target chorus object with the user audio to obtain a chorus file. The application thus helps improve interactivity, arouse and make full use of users' willingness to sing together, and increase how often the chorus function is triggered.

Description

Chorus file generation method, apparatus, device and computer readable storage medium
Technical Field
Embodiments of the present application relate to the field of computer technologies, and in particular, to a chorus file generating method, a chorus file generating apparatus, an electronic device, and a computer-readable storage medium.
Background
The chorus function included in existing singing software usually has to be triggered through multiple click operations by the user. For example, when a user wants to sing the song "XXXX" with a friend, the user first has to select the accompaniment of "XXXX", sing it alone, and then publish the resulting work, so that the friend can be invited to join the chorus when browsing the user's work; alternatively, the user has to enter the friend's homepage and pick, from the friend's works, a performance of "XXXX" to sing along with. Both routes are cumbersome and weakly interactive, which reduces users' willingness to sing together and, in turn, how often the chorus function is triggered.
It is to be noted that the information disclosed in the above background section is only intended to enhance understanding of the background of the present application, and may therefore contain information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above problems, the inventors made targeted improvements and provide a chorus file generation method, a chorus file generation apparatus, an electronic device, and a computer-readable storage medium, which solve the cumbersome triggering of the chorus function and thereby help improve interactivity, arouse and make full use of users' willingness to sing together, and increase how often the chorus function is triggered.
According to a first aspect of an embodiment of the present application, a method for generating a chorus file is disclosed, which includes:
when a first interactive operation triggering the start of the audio recording function is detected, displaying a plurality of interactive controls each representing a different recording mode, wherein the plurality of interactive controls include a chorus control;
if a second interactive operation acting on the chorus control is detected, determining the chorus objects to be selected corresponding to the second interactive operation and displaying them;
and if a third interactive operation selecting a target chorus object from the chorus objects to be selected is detected, recording the user audio and synthesizing the audio file corresponding to the target chorus object with the user audio to obtain a chorus file.
In one embodiment, based on the foregoing scheme, the plurality of interactive controls further include an interactive control representing a segment solo recording mode and an interactive control representing a full-song solo recording mode.
In one embodiment, based on the foregoing scheme, the method further includes:
when the first interactive operation is detected, displaying lyric information corresponding to the target accompaniment;
and keeping the lyric information corresponding to the target accompaniment displayed when the second interactive operation or the third interactive operation is detected.
In one embodiment, based on the foregoing scheme, after the second interactive operation acting on the chorus control is detected, the method further includes:
if one or more uploaded files corresponding to the target accompaniment exist, determining the one or more publisher objects corresponding to the one or more uploaded files, wherein the one or more uploaded files include the audio file and the one or more publisher objects include the target chorus object;
and determining the chorus objects to be selected corresponding to the second interactive operation then includes: determining the chorus objects to be selected from the one or more publisher objects;
if it is detected that no uploaded file exists, playing the target accompaniment while recording the user audio, synthesizing the user audio and the target accompaniment into a to-be-chorused file, and uploading the to-be-chorused file to a server as an uploaded file corresponding to the target accompaniment.
In one embodiment, based on the foregoing scheme, the method further includes:
displaying a recording control for the to-be-chorused file;
and when a chorus recording operation acting on the to-be-chorused file recording control is detected and the third interactive operation is not detected, playing the target accompaniment while recording the user audio, and synthesizing the user audio and the target accompaniment to obtain the to-be-chorused file.
In one embodiment, based on the foregoing scheme, determining the chorus objects to be selected from the one or more publisher objects includes:
if the number of publisher objects is less than or equal to a preset threshold, determining the publisher objects as the chorus objects to be selected;
and if the number of publisher objects is greater than the preset threshold, sorting the publisher objects from latest to earliest upload time and selecting a preset number of publisher objects from the sorting result as the chorus objects to be selected.
In one embodiment, based on the foregoing scheme, if it is detected that at least one associated object exists among the publisher objects, sorting the publisher objects from latest to earliest upload time includes:
grouping the publisher objects into a first object group containing the at least one associated object and a second object group containing the other publisher objects, the other publisher objects being the publisher objects other than the at least one associated object;
sorting the first object group and the second object group from latest to earliest upload time to obtain a sorting result comprising a first sorting result and a second sorting result, wherein the first sorting result contains the at least one associated object, the second sorting result contains the other publisher objects, and any associated object in the first sorting result ranks ahead of any other publisher object in the second sorting result.
In one embodiment, based on the foregoing scheme, the associated objects include friend objects, and the other publisher objects include popular cover objects and/or original singer objects, with popular cover objects ranking below original singer objects.
In one embodiment, based on the foregoing scheme, after the third interactive operation selecting a target chorus object from the chorus objects to be selected is detected, the method further includes:
if the target chorus object is a friend object, outputting a feedback prompt indicating that chorus information is fed back to the friend object;
and if the target chorus object is a popular cover object or an original singer object, outputting an association prompt prompting the user to establish an association relationship with that popular cover object or original singer object.
In one embodiment, based on the foregoing scheme, displaying the chorus objects to be selected includes:
determining the object type of each chorus object to be selected and generating the corresponding description information according to that object type, the object types including friend objects, popular cover objects, and original singer objects;
and displaying the chorus objects to be selected together with the description information.
In an embodiment, based on the foregoing scheme, before recording the user audio, the method further includes:
playing the audio file in a preview mode;
and if a confirmation operation for the audio file is detected, recording the user audio.
In an embodiment, based on the foregoing scheme, before recording the user audio, the method further includes:
generating at least one recording sub-mode, and determining a target sub-mode from the at least one recording sub-mode according to a first selection operation;
displaying the lyric information in the target sub-mode, the lyric information in the target sub-mode including a highlighted recording part, and different recording sub-modes corresponding to different recording parts;
and if a second selection operation for the target sub-mode is detected, recording the user audio.
In one embodiment, based on the foregoing scheme, the third interactive operation acts either on a target chorus object or on a random selection control; if the third interactive operation acts on the random selection control, before recording the user audio the method further includes:
randomly selecting one chorus object from the chorus objects to be selected as the target chorus object.
According to a second aspect of the embodiments of the present application, a chorus file generating apparatus is disclosed, comprising a recording mode display unit, a to-be-selected chorus object display unit, an audio recording unit, and a chorus file synthesis unit, wherein:
the recording mode display unit is configured to display, when a first interactive operation triggering the start of the audio recording function is detected, a plurality of interactive controls each representing a different recording mode, the plurality of interactive controls including a chorus control;
the to-be-selected chorus object display unit is configured to determine, when a second interactive operation acting on the chorus control is detected, the chorus objects to be selected corresponding to the second interactive operation and display them;
the audio recording unit is configured to record the user audio when a third interactive operation selecting a target chorus object from the chorus objects to be selected is detected;
and the chorus file synthesis unit is configured to synthesize the audio file corresponding to the target chorus object with the user audio to obtain a chorus file.
In one embodiment, based on the foregoing scheme, the plurality of interactive controls further include an interactive control representing a segment solo recording mode and an interactive control representing a full-song solo recording mode.
In one embodiment, based on the foregoing solution, the apparatus further includes:
an information display unit configured to display lyric information corresponding to the target accompaniment when the first interactive operation is detected, and to keep that lyric information displayed when the second interactive operation or the third interactive operation is detected.
In one embodiment, based on the foregoing scheme, the apparatus further includes an object determining unit, wherein:
the object determining unit is configured to determine, after the to-be-selected chorus object display unit detects the second interactive operation acting on the chorus control and if one or more uploaded files corresponding to the target accompaniment exist, the one or more publisher objects corresponding to the one or more uploaded files, wherein the one or more uploaded files include the audio file and the one or more publisher objects include the target chorus object;
the to-be-selected chorus object display unit determining the chorus objects to be selected corresponding to the second interactive operation then includes: determining the chorus objects to be selected from the one or more publisher objects;
and the audio recording unit is further configured to, when the object determining unit detects that no uploaded file exists, play the target accompaniment while recording the user audio, synthesize the user audio and the target accompaniment into a to-be-chorused file, and upload the to-be-chorused file to a server as an uploaded file corresponding to the target accompaniment.
In one embodiment, based on the foregoing solution, the apparatus further includes:
a control display unit configured to display a recording control for the to-be-chorused file;
and the audio recording unit is further configured to, when a chorus recording operation acting on the to-be-chorused file recording control is detected and the third interactive operation is not detected, play the target accompaniment while recording the user audio, and synthesize the user audio and the target accompaniment to obtain the to-be-chorused file.
In one embodiment, based on the foregoing scheme, the to-be-selected chorus object display unit determining the chorus objects to be selected from the one or more publisher objects includes:
if the number of publisher objects is less than or equal to a preset threshold, the to-be-selected chorus object display unit determines the publisher objects as the chorus objects to be selected;
and if the number of publisher objects is greater than the preset threshold, the to-be-selected chorus object display unit sorts the publisher objects from latest to earliest upload time and selects a preset number of publisher objects from the sorting result as the chorus objects to be selected.
In one embodiment, based on the foregoing scheme, if it is detected that at least one associated object exists among the publisher objects, the to-be-selected chorus object display unit sorting the publisher objects from latest to earliest upload time includes:
the to-be-selected chorus object display unit groups the publisher objects into a first object group containing the at least one associated object and a second object group containing the other publisher objects, the other publisher objects being the publisher objects other than the at least one associated object;
and the to-be-selected chorus object display unit sorts the first object group and the second object group from latest to earliest upload time to obtain a sorting result comprising a first sorting result and a second sorting result, wherein the first sorting result contains the at least one associated object, the second sorting result contains the other publisher objects, and any associated object in the first sorting result ranks ahead of any other publisher object in the second sorting result.
In one embodiment, based on the foregoing scheme, the associated objects include friend objects, and the other publisher objects include popular cover objects and/or original singer objects, with popular cover objects ranking below original singer objects.
In one embodiment, based on the foregoing solution, the apparatus further includes:
a prompt output unit configured to, after the audio recording unit detects the third interactive operation selecting a target chorus object from the chorus objects to be selected, output a feedback prompt indicating that chorus information is fed back to the friend object if the target chorus object is a friend object;
and the prompt output unit is further configured to output, when the target chorus object is a popular cover object or an original singer object, an association prompt prompting the user to establish an association relationship with that popular cover object or original singer object.
In one embodiment, based on the foregoing scheme, the to-be-selected chorus object display unit displaying the chorus objects to be selected includes:
the to-be-selected chorus object display unit determines the object type of each chorus object to be selected and generates the corresponding description information according to that object type, the object types including friend objects, popular cover objects, and original singer objects;
and displays the chorus objects to be selected together with the description information.
In one embodiment, based on the foregoing solution, the apparatus further includes:
a preview mode playing unit configured to play the audio file in a preview mode before the audio recording unit records the user audio;
and the audio recording unit is specifically configured to record the user audio when a confirmation operation for the audio file is detected.
In one embodiment, based on the foregoing solution, the apparatus further includes:
a recording mode generating unit configured to generate at least one recording sub-mode before the audio recording unit records the user audio, and to determine a target sub-mode from the at least one recording sub-mode according to a first selection operation;
a lyric information display unit configured to display the lyric information in the target sub-mode, the lyric information in the target sub-mode including a highlighted recording part, and different recording sub-modes corresponding to different recording parts;
and the audio recording unit is specifically configured to record the user audio when a confirmation operation for the audio file is detected and a second selection operation for the target sub-mode is detected.
In one embodiment, based on the foregoing scheme, the third interactive operation acts either on a target chorus object or on a random selection control; if the third interactive operation acts on the random selection control, the apparatus further includes:
a random selection unit configured to randomly select one chorus object from the chorus objects to be selected as the target chorus object before the audio recording unit records the user audio.
According to a third aspect of embodiments of the present application, there is disclosed an electronic device comprising: a processor; and a memory having computer readable instructions stored thereon, the computer readable instructions, when executed by the processor, implementing the chorus file generation method as disclosed in the first aspect.
According to a fourth aspect of embodiments of the present application, a computer program medium is disclosed, having computer readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the chorus file generation method disclosed according to the first aspect of the present application.
In the present application, when a first interactive operation triggering the start of the audio recording function is detected, a plurality of interactive controls each representing a different recording mode are displayed, the plurality of interactive controls including a chorus control; if a second interactive operation acting on the chorus control is detected, the chorus objects to be selected corresponding to the second interactive operation are determined and displayed; and if a third interactive operation selecting a target chorus object from the chorus objects to be selected is detected, the user audio is recorded and the audio file corresponding to the target chorus object is synthesized with the user audio to obtain a chorus file. Compared with the prior art, implementing the embodiments of the present application simplifies user operation and lowers the difficulty of triggering a chorus, making chorus triggering quick and convenient; it also improves interactivity, captures the moment when the user's creative (singing) enthusiasm is high and makes effective use of it, and on that basis strengthens the social relationships among users and increases the frequency of interaction among them, thereby improving user stickiness.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a flowchart illustrating a chorus file generation method according to an exemplary embodiment of the present application;
FIG. 2 illustrates a schematic interface diagram in a chorus recording mode according to an exemplary embodiment of the present application;
FIG. 3 illustrates a schematic interface diagram in a full-song solo recording mode according to an example embodiment of the present application;
FIG. 4 illustrates a schematic interface diagram when no uploaded file exists according to an example embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an interface after the random selection control is triggered according to an example embodiment of the present application;
FIG. 6 illustrates an interface diagram for demonstrating a recording sub-mode according to an example embodiment of the present application;
FIG. 7 illustrates an interface diagram for presenting another recording sub-mode according to an example embodiment of the present application;
FIG. 8 is a schematic diagram of an interface for receiving a feedback prompt according to an example embodiment of the present application;
FIG. 9 is a schematic diagram of an interface when the target chorus object is a popular cover object or an original singer object according to an example embodiment of the present application;
FIG. 10 is a schematic flow diagram illustrating a chorus file generation method according to another example embodiment of the present application;
FIG. 11 is a block diagram showing the structure of a chorus file generating apparatus according to an exemplary embodiment of the present application;
FIG. 12 is a block diagram showing the structure of a chorus file generating apparatus according to another alternative exemplary embodiment of the present application.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present application will be described with reference to a number of exemplary embodiments. It should be understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present application, and are not intended to limit the scope of the present application in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one of skill in the art, embodiments of the present application may be embodied as an apparatus, device, method, or computer program product. Thus, the present application may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the application, a chorus file generation method, a chorus file generation device, an electronic device and a computer readable storage medium are provided.
Any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present application are explained in detail below with reference to several representative embodiments of the present application.
Summary of the Invention
The 'chorus' function in existing singing software generally requires the user to select an accompaniment, sing it alone, and then publish the resulting work, so that an interested user can be invited to join the chorus while browsing it; alternatively, the user must enter the homepage of a friend, or of the original singer, with whom they want to sing, and then pick the desired work from that page, before the chorus can happen.
Both routes are cumbersome: the user usually needs several operations to get a chorus going, which easily dampens the willingness to sing together and hurts interactivity. Moreover, the applicant found through careful study that both ways of triggering a chorus suffer an experience gap in actual use. On the one hand, a user who is selecting an accompaniment to sing basically already has the desire to create (sing), yet at that moment the existing schemes only let the user publish a semi-finished work, which conflicts with the user's expectation of creating a complete work and makes for a poor experience. On the other hand, when browsing other users' works, the desire to create (sing) is not high, so choruses are rarely joined there, which leaves the overall trigger frequency of the chorus function low in the prior art and user stickiness weak.
In view of these problems, the applicant proposes adding a 'chorus' entry to the singing page itself, so that when the user triggers the start of the audio recording function, several recording modes are displayed, from which the user can choose either a solo recording mode or a chorus recording mode.
In the chorus recording mode the user can further select, from the chorus objects to be selected, the target chorus object to sing with. On the one hand this simplifies user operation and lowers the difficulty of triggering the chorus function, making chorus triggering quick and convenient; on the other hand it improves interactivity, captures the moment when the user's creative (singing) enthusiasm is high and makes effective use of it, and on that basis strengthens the social relationships among users and increases the frequency of interaction among them, thereby improving user stickiness.
Application scene overview
It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
The embodiments of the present application can be applied to singing software: after the user selects a song to sing in the software, the first interactive operation triggering the start of the audio recording function can be detected, upon which multiple recording modes (for example a segment solo recording mode, a full-song solo recording mode, and a chorus mode) are displayed so the user can choose one as needed.
If the user selects the chorus mode, the chorus objects to be selected are determined and displayed so that the user can pick the target chorus object to sing with. The user audio can then be recorded and the audio file corresponding to the target chorus object synthesized with the user audio to obtain the chorus file.
This scheme thus offers a quick way of obtaining a chorus file; it improves interactivity while simplifying user operation, lets the user obtain the desired chorus file efficiently, and helps improve the user experience and user stickiness.
Exemplary method
In combination with the application scenarios described above, a chorus file generation method according to an exemplary embodiment of the present application is described below with reference to fig. 1 to 10.
Referring to fig. 1, fig. 1 is a flowchart illustrating a chorus file generating method according to an exemplary embodiment of the present application, where the chorus file generating method may be implemented by a server or a terminal device.
As shown in fig. 1, a method for generating a chorus file according to an embodiment of the present application includes:
step S110: when a first interactive operation for triggering the audio recording function to start is detected, displaying a plurality of interactive controls respectively used for representing different recording modes; wherein, the plurality of interactive controls comprise chorus controls.
Step S120: and if the second interactive operation acting on the chorus control is detected, determining the chorus object to be selected corresponding to the second interactive operation and displaying the chorus object to be selected.
Step S130: and if detecting a third interactive operation for selecting a target chorus object from the chorus objects to be selected, recording the user audio and synthesizing an audio file corresponding to the target chorus object with the user audio to obtain a chorus file.
Implementing the chorus file generation method shown in FIG. 1 simplifies user operation and lowers the difficulty of triggering a chorus, making chorus triggering quick and convenient; it also improves interactivity, captures the moment when the user's creative (singing) enthusiasm is high and makes effective use of it, and on that basis strengthens the social relationships among users and increases the frequency of interaction among them, thereby improving user stickiness.
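As a concrete illustration of the flow of steps S110 to S130, the following minimal Python sketch shows how a client might dispatch the three interactive operations. Every name in it (ChorusRecorderUI, determine_candidates, and so on) is hypothetical; the embodiments of the present application do not prescribe any particular implementation.

    # Minimal sketch of the S110-S130 dispatch flow; all names are
    # hypothetical and the placeholder bodies stand in for real UI,
    # server, and audio logic.
    class ChorusRecorderUI:
        def show_controls(self, modes):
            print("showing recording-mode controls:", modes)

        def determine_candidates(self):
            # Would query the server for publisher objects whose uploaded
            # files correspond to the target accompaniment.
            return ["candidate_241", "candidate_242", "candidate_243"]

        def record_user_audio(self):
            return b"<recorded user vocal>"  # placeholder vocal bytes

        def synthesize(self, target_audio_file, user_audio):
            return (target_audio_file, user_audio)  # placeholder mix

        def on_record_start(self):               # S110: first interactive operation
            self.show_controls(["segment solo", "full-song solo", "chorus"])

        def on_chorus_control(self):             # S120: second interactive operation
            self.candidates = self.determine_candidates()
            print("chorus objects to be selected:", self.candidates)

        def on_target_selected(self, target_audio_file):  # S130: third interactive operation
            user_audio = self.record_user_audio()
            return self.synthesize(target_audio_file, user_audio)  # the chorus file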
These steps are described in detail below.
In step S110, when a first interactive operation triggering the start of the audio recording function is detected, a plurality of interactive controls each representing a different recording mode are displayed, the plurality of interactive controls including a chorus control.
Specifically, the first interactive operation, the second interactive operation, the third interactive operation, the chorus recording operation, the confirmation operation, the first selection operation, and the second selection operation, which are described below, may all be user operations, and the user operations may be click operations, touch screen operations, gesture operations, or voice control operations, which is not limited in the embodiment of the present application.
Referring to FIG. 2, FIG. 2 is a schematic diagram illustrating an interface in the chorus recording mode according to an exemplary embodiment of the present application. As shown in FIG. 2, the user interface 200 may include: a song information display area 210, a lyric information display area 220, a confirmation control 230, a to-be-selected chorus object display area 240, a random selection control 250, a to-be-chorused file recording control 260, an interactive control 270 representing the segment solo recording mode, an interactive control 280 representing the full-song solo recording mode, and a chorus control 290.
The elements in FIG. 2 are described below.
The song information display area 210 is configured to display song information corresponding to the target accompaniment, where the song information may include a song name, a singer, an album name, and the like.
The lyric information display area 220 is configured to display lyric information corresponding to the target accompaniment, where the lyric information may be displayed by scrolling or by page turning.
The confirmation control 230, shown as "next" in FIG. 2, can trigger the recording of the user audio described above when a confirmation operation acting on it is detected.
The to-be-selected chorus object display area 240 includes: a chorus object 241 to be selected, a chorus object 242 to be selected, a chorus object 243 to be selected, and a description information display area 2411. The chorus objects 241, 242, and 243 to be selected are all interactive.
When a first interactive operation triggering the start of the audio recording function is detected, a user interface presenting the singing function can be output. Optionally, the target accompaniment corresponding to the song information in the song information display area 210 may then be played while the corresponding lyric information is displayed; or only the lyric information corresponding to the target accompaniment may be displayed.
Further, when the second interactive operation acting on the chorus control 290 is detected, the chorus objects 241, 242, and 243 to be selected that correspond to the second interactive operation may be determined and displayed, and the chorus control 290 may be highlighted (as shown in FIG. 2) to indicate that it has been selected by the user.
Further, when a third interactive operation (i.e., the user tapping the avatar identifier of a chorus object to be selected) selecting a target chorus object from the chorus objects 241, 242, and 243 to be selected is detected, it is determined that the user has selected the audio file (i.e., the singing work) of the target chorus object for the target accompaniment.
Further, the currently playing target accompaniment can be switched to the audio file corresponding to the target chorus object; illustratively, in the user interface 200 the target chorus object is the chorus object 241 to be selected. Note that the lyric information corresponding to the target accompaniment remains displayed when the second or third interactive operation is detected.
In FIG. 2, the audio file currently played in the preview mode is the one corresponding to the chorus object 241 to be selected, whose identifier carries a playing mark distinguishing it from the chorus objects 242 and 243 to be selected. In addition, the description information display area 2411 displays the description information "Xiao A sang this song 24 days ago" corresponding to the chorus object 241 to be selected.
Furthermore, if the user is detected interacting with the chorus object 242 or 243 to be selected, that object is determined as the new target chorus object: the audio file played in the preview mode is switched to the one corresponding to the newly selected object, the lyric information corresponding to the target accompaniment remains displayed, and the playing mark is removed from the chorus object 241 to be selected and added to the newly selected object. Accordingly, the description information "Xiao A sang this song 24 days ago" displayed in the description information display area 2411 is switched to the description information corresponding to the newly selected object. In addition, the chorus objects 241, 242, and 243 to be selected can be rendered as controls identified by their different user avatars.
The random selection control 250 is used, once triggered, to randomly select one chorus object from the chorus objects to be selected as the target chorus object. Specifically, when the user has no preference among the chorus objects to be selected, a target chorus object can be determined at random by triggering the random selection control 250.
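The behaviour of the random selection control 250 amounts to a uniform random draw; a minimal sketch, assuming a candidates list is already at hand:

    import random

    candidates = ["candidate_241", "candidate_242", "candidate_243"]
    target = random.choice(candidates)  # random selection control behaviour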
The to-be-chorused file recording control 260, denoted "initiate chorus" in FIG. 2, triggers the playing of the target accompaniment and the synchronized recording of the user audio described below when a chorus recording operation acting on it is detected and the third interactive operation is not detected; here the target accompaniment may be the original accompaniment.
As an optional implementation, the method further includes: displaying the recording control for the to-be-chorused file; and when a chorus recording operation acting on the to-be-chorused file recording control is detected and the third interactive operation is not detected, playing the target accompaniment while recording the user audio, and synthesizing the user audio and the target accompaniment to obtain the to-be-chorused file.
Specifically, if a chorus recording operation acting on the to-be-chorused file recording control 260 is detected and the third interactive operation is not detected, it is determined that the user wants to generate a to-be-chorused file. After the user audio and the target accompaniment are synthesized into the to-be-chorused file, the method may further include: uploading the to-be-chorused file to a server as an uploaded file corresponding to the target accompaniment.
This optional implementation thus provides a way to generate to-be-chorused files, giving users richer chorus choices and helping improve the user experience.
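As one possible way to realize the "synthesize the user audio and the target accompaniment" step, the sketch below overlays the two tracks using the pydub library. The choice of pydub, the file names, and the output format are illustrative assumptions; the embodiments do not specify a mixing technique.

    # Hypothetical synthesis of the to-be-chorused file: overlay the
    # recorded vocal onto the target accompaniment (requires pydub and ffmpeg).
    from pydub import AudioSegment

    accompaniment = AudioSegment.from_file("target_accompaniment.mp3")
    user_vocal = AudioSegment.from_file("user_vocal.wav")

    # overlay() mixes the vocal onto the accompaniment starting at 0 ms.
    to_be_chorused = accompaniment.overlay(user_vocal)
    to_be_chorused.export("to_be_chorused.mp3", format="mp3")
    # The exported file would then be uploaded to the server as an
    # uploaded file corresponding to the target accompaniment.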
The interactive control 270 representing the segment solo recording mode is denoted "sing a segment 00:40" in FIG. 2, where 00:40 is the duration of the to-be-sung segment of the target accompaniment; the to-be-sung segment may be the accompaniment of the chorus (refrain) part. When the user triggers the interactive control 270, the to-be-sung segment can be played while the user audio is recorded, the two are synthesized into a segment singing file, and the segment singing file is uploaded to the server as an uploaded file corresponding to the target accompaniment.
The interactive control 280 representing the full-song solo recording mode is denoted "sing the whole song 02:30" in FIG. 2, where 02:30 is the duration of the target accompaniment. When the user triggers the interactive control 280, the complete target accompaniment can be played while the user audio is recorded, the two are synthesized into a complete singing file, and the complete singing file is uploaded to the server as an uploaded file corresponding to the target accompaniment.
The chorus control 290 is denoted "sing with TA 00:26" in FIG. 2, where 00:26 is the duration of the audio file. When the user triggers the chorus control 290, the target accompaniment corresponding to the lyric information can be played, the lyric information displayed, and the chorus objects 241, 242, and 243 to be selected shown, so the user can pick the target chorus object among them. After the user selects the chorus object 241 to be selected as the target chorus object, the audio file corresponding to the target chorus object can be played while the lyric information remains displayed. Further, when the user is detected triggering the confirmation control 230, FIG. 6 or FIG. 7 may be displayed to detect the first selection operation; the user audio is then recorded according to the target sub-mode (e.g., a first or second singing mode) corresponding to the detected first selection operation, the audio file corresponding to the target chorus object and the user audio are synthesized into a to-be-chorused file, and the to-be-chorused file is uploaded to the server as an uploaded file corresponding to the target accompaniment.
As an optional implementation, the plurality of interactive controls further include an interactive control representing a segment solo recording mode and an interactive control representing a full-song solo recording mode.
Specifically, the segment solo recording mode provides a segment solo function and the full-song solo recording mode provides a whole-song solo function; apart from the lyric information involved, the corresponding user interfaces of the two modes are the same.
Referring to FIG. 3, FIG. 3 is a schematic diagram illustrating an interface in the full-song solo recording mode according to an exemplary embodiment of the present application. As shown in FIG. 3, the user interface 300 may include: a song information display area 310, a lyric information display area 320, a confirmation control 330, an interactive control 340 representing the segment solo recording mode, an interactive control 350 representing the full-song solo recording mode, and a chorus control 360.
Specifically, upon detecting an interactive operation acting on the interactive control 350, the user interface 300 shown in FIG. 3 may be presented. In the user interface 300, the song information display area 310 may present song information such as the song title, singer, and album title, and the lyric information display area 320 may present the lyrics of the entire song. The user can preview the whole song to be sung through the user interface 300; when a confirmation operation acting on the confirmation control 330 (denoted "next") is detected, the target accompaniment is played while the user audio is recorded. Then, when the target accompaniment is detected to have finished playing, the recording ends, the recorded user audio and the target accompaniment are synthesized, the synthesis result is uploaded to the server, and the server establishes a binding relationship between the synthesis result and the target accompaniment. The synthesis result is an uploaded file corresponding to the target accompaniment and can serve as the audio file for other users in the chorus recording mode.
Note that the target accompaniment in the full-song solo recording mode is the accompaniment of the entire song, while the target accompaniment in the segment solo recording mode is the accompaniment of one segment of the song.
This optional implementation thus offers users multiple singing modes and helps improve interactivity. Moreover, the chorus mode is integrated alongside the usual solo recording modes, so a user with a chorus need can be satisfied conveniently and in time, which helps improve user stickiness. Compared with the prior art, the user no longer has to reach the chorus mode through multiple taps (such as tapping "friend homepage", then "friend's singing work", then "chorus with the friend's singing work"), which increases how often the chorus function is used.
In step S120, if a second interactive operation acting on the chorus control is detected, the chorus objects to be selected corresponding to the second interactive operation are determined and displayed. Referring to FIG. 2, for example, the chorus objects to be selected corresponding to the second interactive operation may be the chorus objects 241, 242, and 243 to be selected.
Specifically, there may be one or more chorus objects to be selected; the embodiments of the present application impose no limitation on this, and the chorus objects to be selected can be displayed in the to-be-selected chorus object display area 240.
In addition, the chorus objects to be selected may be displayed by showing the interactive identifier corresponding to each of them. For example, in FIG. 2 the interactive identifiers of the chorus objects 241, 242, and 243 to be selected are their respective user avatars; since the chorus object 241 to be selected is the target chorus object, its interactive identifier can be determined as the target identifier. Further, the method may include: when an object interaction operation acting on an interactive identifier in the to-be-selected chorus object display area 240 is detected, determining that identifier as the target identifier, the chorus object to be selected corresponding to the target identifier being the target chorus object.
On this basis, playing the audio file in the preview mode may be embodied as: playing, in the preview mode, the audio file corresponding to the target identifier.
Further, the method may include: when an object interaction operation acting on the target identifier is detected again, pausing the playback of the audio file until the next object interaction operation acting on the target identifier is detected, at which point playback resumes.
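The select/pause/resume behaviour described above is a small piece of state; the sketch below captures it (the class and field names are assumptions for illustration):

    # Hypothetical preview-state handler: tapping a new identifier selects
    # it and starts preview playback; tapping the current target toggles
    # pause/resume.
    class PreviewState:
        def __init__(self):
            self.target = None
            self.playing = False

        def on_identifier_tapped(self, identifier):
            if identifier == self.target:
                self.playing = not self.playing  # pause, or resume on the next tap
            else:
                self.target, self.playing = identifier, True  # new target identifier
            return self.target, self.playing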
As an optional implementation, after the second interactive operation acting on the chorus control is detected, the method further includes:
if one or more uploaded files corresponding to the target accompaniment exist, determining the one or more publisher objects corresponding to the one or more uploaded files, wherein the one or more uploaded files include the audio file and the one or more publisher objects include the target chorus object.
On this basis, determining the chorus objects to be selected corresponding to the second interactive operation may be embodied as determining the chorus objects to be selected from the one or more publisher objects; referring to FIG. 2, for example, the chorus objects 241, 242, and 243 to be selected may be determined from the one or more publisher objects.
Further, if it is detected that no uploaded file exists, the target accompaniment is played while the user audio is recorded; the user audio and the target accompaniment are synthesized into a to-be-chorused file, and the to-be-chorused file is uploaded to a server as an uploaded file corresponding to the target accompaniment.
Specifically, an uploaded file may be a singing work corresponding to the target accompaniment: a to-be-chorused work (i.e., the above-mentioned to-be-chorused file), a singing work of the entire song, or a singing work of a song segment.
In addition, uploaded files and publisher objects may correspond one-to-one, or many-to-one with several uploaded files corresponding to one publisher object; that is, one publisher object may simultaneously have a to-be-chorused work, a singing work of the entire song, and a singing work of a song segment among its uploaded files.
On this basis, optionally, after the chorus objects to be selected are determined from the one or more publisher objects, the method may further include: if several uploaded files corresponding to the chorus object 241, 242, or 243 to be selected are detected, selecting, in order of upload time from earliest to latest, the first-ranked uploaded file as the audio file corresponding to that chorus object to be selected. When the chorus object 241, 242, or 243 to be selected is the target chorus object, the user can record the chorus based on the audio file corresponding to it.
In addition, the audio file included in the one or more uploaded files is the audio file corresponding to the target chorus object.
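The earliest-upload rule above reduces to a single comparison; a minimal sketch, assuming each uploaded file carries an upload_time field (ISO-8601 strings compare chronologically):

    # Hypothetical pick of the audio file when a publisher object has
    # several uploaded files: the first in earliest-to-latest upload order.
    uploads = [
        {"name": "full_song.mp3", "upload_time": "2020-09-02T10:00:00"},
        {"name": "segment.mp3", "upload_time": "2020-09-01T09:30:00"},
    ]
    audio_file = min(uploads, key=lambda f: f["upload_time"])  # -> segment.mp3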
Referring to FIG. 4, FIG. 4 is a schematic diagram illustrating an interface when no uploaded file exists according to an example embodiment of the present application. When it is detected that no uploaded file exists, a user interface 400 as shown in FIG. 4 may be displayed, which may include: a song information display area 410, a lyric information display area 420, a confirmation control 430, an interactive control 440 representing the segment solo recording mode, an interactive control 450 representing the full-song solo recording mode, a chorus control 460, and a description information display area 470.
Specifically, detecting that no uploaded file exists indicates that no user has yet uploaded a singing work for the target accompaniment. In that case, the user interface 400 may, while displaying the lyric information, also show the description "No one has sung this song yet, go ahead and initiate a chorus" in the description information display area 470 to prompt the user to record a to-be-chorused file. After the to-be-chorused file is uploaded to the server as an uploaded file corresponding to the target accompaniment, other users who trigger the chorus control can record a chorus based on it; the current user's to-be-chorused file thus becomes, for other users, an uploaded file corresponding to the target accompaniment.
This optional implementation thus provides two chorus paths: on the one hand the user can record a chorus based on to-be-chorused files (i.e., uploaded files) uploaded by other users; on the other hand the user can record a to-be-chorused file and upload it to the server to provide it for other users. This enriches the forms of interaction, improves the user experience, and helps improve user stickiness.
As an optional implementation, determining the chorus objects to be selected from the one or more publisher objects includes: if the number of publisher objects is less than or equal to a preset threshold (e.g., 3), determining the publisher objects as the chorus objects to be selected; if the number of publisher objects is greater than the preset threshold, sorting the publisher objects from latest to earliest upload time and selecting a preset number (e.g., 3) of publisher objects from the sorting result as the chorus objects to be selected.
Specifically, the published objects may be sorted by upload time from latest to earliest as follows: determine the upload time of each uploaded file corresponding to the target accompaniment, and sort the published objects accordingly.
Furthermore, the preset number of published objects may be selected from the sorting result as the chorus objects to be selected in either of two ways: selecting a preset number of published objects directly from the sorting result; or selecting published objects from each of the first sorting result and the second sorting result contained in the sorting result.
Therefore, implementing this optional implementation presents the chorus objects to be selected to the current user according to upload time, so that the current user can interact with them, which helps improve user stickiness.
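As a concrete illustration of the selection rule above, the following Python sketch applies the threshold test and the latest-first ordering; the data shape (a list of (object_id, upload_time) pairs) and the default values are assumptions for illustration.

```python
def select_candidates(published, preset_threshold=3, preset_count=3):
    """Return candidate chorus objects from (object_id, upload_time) pairs."""
    if len(published) <= preset_threshold:
        return [obj for obj, _ in published]            # few enough: take them all
    ranked = sorted(published, key=lambda p: p[1], reverse=True)  # latest first
    return [obj for obj, _ in ranked[:preset_count]]    # top preset number
```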
As an optional implementation, if it is detected that at least one associated object exists among the published objects, sorting the published objects by upload time from latest to earliest includes: grouping the published objects to obtain a first-class object group containing the at least one associated object and a second-class object group containing the other published objects; and sorting the first-class object group and the second-class object group by upload time from latest to earliest to obtain a sorting result comprising a first sorting result and a second sorting result.
The first sorting result contains the at least one associated object, the second sorting result contains the other published objects, and any associated object in the first sorting result ranks ahead of any other published object in the second sorting result. An associated object has an association relationship with the current user, and the other published objects are the published objects other than the at least one associated object. The associated objects include friend objects, and the other published objects include popular cover objects and/or original singer objects; a popular cover object ranks lower than an original singer object.
Specifically, the second-class object group containing the other published objects may include popular cover objects and/or original singer objects. The associated objects may include at least one friend object, and a friend object may be identified as follows: if a binding relationship exists between the user ID of user A and the current user's ID, user A is determined to be a friend object. A friend object may be represented in the user interface as a user followed by the current user.
In addition, optionally, before sorting the first-class object group and the second-class object group by upload time from latest to earliest, the method may further include: detecting whether the number of associated objects in the first-class object group reaches the preset threshold; if so, sorting only the first-class object group by upload time from latest to earliest to obtain the first sorting result, and selecting a preset number of associated objects from the first sorting result for display as the chorus objects to be selected; if not, sorting both the first-class object group and the second-class object group by upload time from latest to earliest.
Therefore, implementing this optional implementation sorts different types of published objects by priority, so that the top-ranked preset number of published objects are selected from the sorting result as the chorus objects to be selected, which helps promote the user's willingness to chorus and to interact.
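The priority ordering described above can be sketched as follows; this is a hedged illustration under assumed data shapes (dicts with 'object_id' and 'upload_time' keys), not the application's actual implementation.

```python
def rank_published(published, friend_ids):
    """Friend (associated) objects first, then the rest; each group latest-first."""
    friends = [p for p in published if p["object_id"] in friend_ids]      # first-class group
    others = [p for p in published if p["object_id"] not in friend_ids]   # second-class group
    by_time = lambda p: p["upload_time"]
    first_result = sorted(friends, key=by_time, reverse=True)    # first sorting result
    second_result = sorted(others, key=by_time, reverse=True)    # second sorting result
    # Any associated object ranks ahead of any other published object.
    return first_result + second_result
```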
As an optional implementation, displaying the chorus objects to be selected includes: determining the object type of each chorus object to be selected, and generating the corresponding description information according to the object type, the object types including friend objects, popular cover objects, and original singer objects; and displaying the chorus objects to be selected together with the description information.
In particular, the description information may describe behavior (e.g., "Xiao A sang this song 24 days ago"), may describe status (e.g., "Xiao B is online"), or may be preset text (e.g., "Come and sing with me"). For example, the description information corresponding to a friend object may be "xxx sang this song 5 days ago", the description information corresponding to a popular cover object may be "Sing a chorus together with xxx", and the description information corresponding to an original singer object may be "Sing along with xxx".
Optionally, if the object type is a friend object, the description information corresponding to the chorus object to be selected may be generated according to the nickname of the friend object and the upload time of the uploaded file corresponding to the friend object.
Therefore, implementing this optional implementation generates the corresponding description information in a personalized manner according to the object type, which helps promote the user's willingness to chorus, thereby serving the chorus purpose within the singing function.
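For illustration, description information could be generated per object type roughly as below; the templates paraphrase the examples above, and the function signature is an assumption.

```python
def build_description(obj_type, nickname=None, days_ago=None):
    """Generate description text for a candidate chorus object by object type."""
    if obj_type == "friend":
        return f"{nickname} sang this song {days_ago} days ago"
    if obj_type == "popular_cover":
        return f"Sing a chorus together with {nickname}"
    if obj_type == "original_singer":
        return f"Sing along with {nickname}"
    return "Come and sing with me"   # preset fallback text
```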
In step S130, if a third interactive operation for selecting a target chorus object from the chorus objects to be selected is detected, user audio is recorded and the audio file corresponding to the target chorus object is synthesized with the user audio to obtain a chorus file. Referring to fig. 2, for example, the third interactive operation may be used to select a target chorus object (e.g., the chorus object 241 to be selected) from the chorus object 241 to be selected, the chorus object 242 to be selected, and the chorus object 243 to be selected.
Specifically, the audio file corresponding to the target chorus object includes one or more sung parts and one or more parts to be sung. A sung part is a part already sung by the target chorus object and contains the target chorus object's vocals together with the accompaniment of that part; a part to be sung contains only the accompaniment of that part.
Further, optionally, the audio file corresponding to the target chorus object may be synthesized with the user audio as follows: the user audio is synthesized with the parts to be sung of the audio file to obtain the chorus file, so that the chorus file contains the current user's vocals, the target chorus object's vocals, and the target accompaniment.
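A minimal sketch of this segment structure and synthesis step follows; Segment, mix, and the field names are illustrative assumptions, and real part boundaries would come from the audio file itself.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Segment:
    start_ms: int
    end_ms: int
    sung: bool   # True: target object's vocals + accompaniment; False: accompaniment only
    audio: Any   # rendered audio of this part

def synthesize_chorus(segments: list[Segment], user_audio: Any,
                      mix: Callable[[Any, Any], Any]) -> list[Any]:
    """Keep sung parts as-is; overlay the user's vocals on each to-be-sung part."""
    out = []
    for seg in segments:
        if seg.sung:
            out.append(seg.audio)                     # the target chorus object's part
        else:
            out.append(mix(seg.audio, user_audio))    # user vocals over the accompaniment
    return out
```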
As an optional implementation, the third interactive operation acts either on the target chorus object or on the random selection control. If the third interactive operation acts on the random selection control, before recording the user audio, the method further includes: randomly selecting one chorus object from the chorus objects to be selected as the target chorus object.
Specifically, one chorus object may be randomly selected as the target chorus object from the chorus objects to be selected that are not currently in the display area.
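This random pick can be sketched in a few lines (illustrative; the fallback when every candidate is already displayed is an assumption):

```python
import random

def pick_random_target(all_candidates, displayed):
    """Randomly choose a target chorus object, preferring undisplayed candidates."""
    hidden = [c for c in all_candidates if c not in displayed]
    return random.choice(hidden or all_candidates)
```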
Optionally, before recording the user audio, the following steps may also be performed: when a user operation for triggering an update of the chorus objects to be selected in the to-be-selected chorus object display area 240 is detected, selecting a preset number of replacement objects from the published objects not currently in the display area, replacing the chorus objects to be selected in the display area with the replacement objects, and discarding the replaced chorus objects to be selected. This enriches the selection diversity: when the user is not satisfied with the currently displayed chorus objects to be selected, other candidates can be selected from those not yet displayed and shown to the user, so that the user can select a preferred target chorus object, improving interactivity and user stickiness.
For example, when the user has no wish to chorus with the chorus object 241 to be selected, the chorus object 242 to be selected, or the chorus object 243 to be selected displayed in the to-be-selected chorus object display area 240, a left-swipe or right-swipe operation may be performed in the display area 240 to update the chorus objects to be selected displayed there.
When the left-swipe or right-swipe operation is detected, a preset number of replacement objects may be randomly selected from the published objects not displayed in the to-be-selected chorus object display area 240 to replace the chorus object 241 to be selected, the chorus object 242 to be selected, and the chorus object 243 to be selected, thereby updating the display area 240.
Further, the user may select a target chorus object from the updated to-be-selected chorus object display area 240.
It should be noted that the left-swipe and right-swipe operations are examples of the above-mentioned user operation for triggering an update of the chorus objects to be selected in the display area.
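The swipe-to-refresh behavior can be sketched as below; keeping the current display when too few hidden objects remain is an assumption, as the application does not specify that case.

```python
import random

def refresh_display(displayed, published, preset_count=3):
    """Replace the displayed candidates with objects not currently shown."""
    hidden = [p for p in published if p not in displayed]
    if len(hidden) < preset_count:
        return displayed            # not enough replacements; keep the current display
    return random.sample(hidden, preset_count)   # old candidates are discarded
```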
Referring to fig. 5, fig. 5 is a schematic diagram illustrating the interface after the random selection control is triggered, according to an example embodiment of the present application. As shown in fig. 5, the user interface 500 may include: a song information display area 510, a lyrics information display area 520, a confirmation control 530, a to-be-selected chorus object display area 540, a random selection control 550, a to-be-chorus file recording control 560, an interaction control 570 for representing a segment solo recording mode, an interaction control 580 for representing a full-segment solo recording mode, and a chorus control 590. The to-be-selected chorus object display area 540 may include a chorus object 541 to be selected, a chorus object 542 to be selected, a chorus object 543 to be selected, and a description information display area 5411. The chorus object 541 to be selected, the chorus object 542 to be selected, and the chorus object 543 to be selected are each interactable.
When the third interactive operation acts on a target chorus object, the user interface may be as shown in fig. 2; when it acts on the random selection control, the user interface may be as shown in fig. 5. Fig. 2 and fig. 5 differ in the description information display area. In fig. 5, the description information display area 5411 corresponds to the random selection control 550 and contains the description text "Come and sing with me". In fig. 2, the description information display area 2411 corresponds to the chorus object 241 to be selected, which is the object selected by the user through the third interactive operation, that is, the above-mentioned target chorus object.
Therefore, this optional implementation provides the user with a function of randomly selecting a chorus object, which helps users who find it hard to decide.
As an optional implementation, before recording the user audio, the method further includes: playing the audio file in a preview mode; and if a confirmation operation for the audio file is detected, proceeding to record the user audio.
Specifically, the preview mode plays the audio file corresponding to a chorus object to be selected in advance and synchronously displays the lyric information for the user to listen to. If the user approves of the audio file heard in the preview mode and performs the confirmation operation on it, the chorus object to be selected corresponding to that audio file can be determined as the target chorus object.
Therefore, this optional implementation provides a preview function: the user can preview one or more audio files until a preferred audio file is found, which helps improve the user experience.
As an optional implementation, before recording the user audio, the method further includes: generating at least one recording sub-mode, and determining a target sub-mode from the at least one recording sub-mode according to a first selection operation; displaying the lyric information in the target sub-mode, where the lyric information in the target sub-mode includes highlighted recording parts and different recording sub-modes correspond to different recording parts; and if a second selection operation for the target sub-mode is detected, proceeding to record the user audio.
Specifically, the highlighted recording parts may be displayed by highlighting, bold font, a changed font background color, and the like; the embodiments of the present application are not limited in this respect. In addition, the target sub-mode may be an "I sing first" mode or an "I sing later" mode. The second selection operation for the target sub-mode may represent a confirmation of the target sub-mode.
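The two sub-modes can be illustrated with a lyric-annotation sketch; the strict line alternation between singers is an assumption for illustration, since the real part boundaries follow the audio file.

```python
def annotate_lyrics(lines, mode):
    """Flag the lines the current user records; a UI layer could render them bold."""
    annotated = []
    for i, text in enumerate(lines):
        mine = (i % 2 == 0) if mode == "I sing first" else (i % 2 == 1)
        annotated.append({"text": text, "highlight": mine})
    return annotated
```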
Referring to fig. 6, fig. 6 is a schematic interface diagram for showing a recording sub-mode according to an example embodiment of the present application. As shown in fig. 6, user interface 600 may include: song information presentation area 610, lyrics information presentation area 620, singing control 630, recording sub-mode control 640, recording sub-mode control 650, current user 660, target chorus object 670, and feedback prompt 671.
Specifically, the user interface 600 may be output after the third interactive operation for selecting the target chorus object from the chorus objects to be selected is detected. Further, when a user operation triggering the recording sub-mode control 640, labeled "I sing first", is detected, the lyric information in the "I sing first" sub-mode may be displayed in the lyric information display area 620. The lyric information includes a highlighted part and a non-highlighted part, where the non-highlighted part is the singing part corresponding to the target chorus object. In fig. 6, the highlighted part is shown in bold font, but this is only exemplary; other highlighting manners may also be used in practice.
In fig. 6, the target chorus object 670 is a friend object, and thus a feedback prompt 671 corresponding to the target chorus object 670 is also shown. The feedback prompt 671 includes the interactable part A described below, represented in fig. 6 in bold as "Remind him". By clicking "Remind him", the user can send a generated chorus message to the friend object's client, so that the friend object's client interface displays the chorus message. The chorus message may include a text prompt (e.g., "You sang 'xxxxxx' really well, I'm joining the chorus") and/or a chorus link. When the friend who receives the chorus message clicks the chorus link, the friend is taken to a chorus file playing page, where the chorus file is played according to a detected playing operation and the lyric information is displayed synchronously during playback.
Further, when a second selection operation acting on the singing control 630 in the "I sing first" sub-mode (i.e., the target sub-mode described above) is detected, the step of recording the user audio may be performed.
Referring to fig. 7, fig. 7 is a schematic interface diagram for showing another recording sub-mode according to an example embodiment of the present application. As shown in fig. 7, user interface 700 may include: song information presentation area 710, lyrics information presentation area 720, singing control 730, recording sub-mode control 740, recording sub-mode control 750, current user 760, target chorus object 770, and feedback prompt 771.
The song information display area 710, singing control 730, recording sub-mode control 740, recording sub-mode control 750, current user 760, target chorus object 770, and feedback prompt 771 correspond in sequence to the song information display area 610, singing control 630, recording sub-mode control 640, recording sub-mode control 650, current user 660, target chorus object 670, and feedback prompt 671, so the description of each element in fig. 7 is not repeated.
Corresponding to fig. 6, when a user operation triggering the recording sub-mode control 750, labeled "I sing later", is detected, the lyric information in the "I sing later" sub-mode may be displayed in the lyric information display area 720. The lyric information includes a highlighted part and a non-highlighted part, where the non-highlighted part is the singing part corresponding to the target chorus object. In fig. 7, the highlighted part is shown in bold font, but this is only exemplary; other highlighting manners may also be used in practice.
Further, when a second selection operation acting on the singing control 730 in the "I sing later" sub-mode (i.e., the target sub-mode described above) is detected, the step of recording the user audio may be performed.
It should be noted that, for the lyric information, the non-highlighted part shown in fig. 6 is the highlighted part shown in fig. 7, and vice versa.
Therefore, implementing this optional implementation provides the user with multiple selectable sub-modes, so that the user can choose a preferred sub-mode to sing in as needed and obtain a chorus file that meets their needs, which helps improve the user experience and user stickiness.
As an optional implementation, after the third interactive operation for selecting the target chorus object from the chorus objects to be selected is detected, the method further includes: if the target chorus object is a friend object, outputting a feedback prompt for indicating that a chorus message can be fed back to the friend object; and if the target chorus object is a popular cover object or an original singer object, outputting an association prompt for prompting the user to establish an association relationship with the popular cover object or the original singer object.
Specifically, the feedback prompt may include an interactable part A. After the feedback prompt is output, the method may further include: generating a chorus message according to the user information (such as the avatar identifier and/or nickname) corresponding to the friend object, and, when a user operation triggering the interactable part A is detected, sending the chorus message to the friend object's client so that the friend object's client interface displays it.
In addition, optionally, the association prompt includes an interactable part B. After the association prompt is output, the method may further include: if a user operation triggering the interactable part B is detected, binding the current user's ID with the popular cover object's ID or the original singer object's ID, thereby establishing the association relationship.
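The branching between the two prompt types, and the actions behind their interactable parts, can be sketched as follows; `client` is a hypothetical service facade, and every call and message template here is an assumption.

```python
def prompt_for_target(target):
    """Feedback prompt for friends; association (follow) prompt for the rest."""
    if target["type"] == "friend":
        return {"kind": "feedback", "action": "remind",
                "text": f"You sang '{target['song']}' really well, I'm joining the chorus"}
    return {"kind": "association", "action": "follow", "text": "+ Follow"}

def on_prompt_triggered(prompt, current_user_id, target_id, client):
    if prompt["action"] == "remind":
        client.send_chorus_message(target_id, prompt["text"])  # interactable part A
    else:
        client.bind(current_user_id, target_id)  # interactable part B: association
```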
Referring to fig. 8, fig. 8 is a schematic diagram of an interface receiving a feedback prompt according to an example embodiment of the present application. As shown in fig. 8, the user interface 800 may be a client interface of the friend object and may specifically include: a current user identifier 810, a chorus message 820, a message input area 830, and a message sending control 840. Before a message entered by the user is detected, the message input area 830 may display the prompt "Please enter a message…"; the friend can then enter a reply in the message input area 830 and trigger the message sending control 840 to send it, thereby replying to the chorus message 820.
Specifically, upon detecting the user operation triggering the interactable part A, the chorus message 820 described above ("You sang 'xxxxxx' really well, I'm joining the chorus") is sent to the friend object's client, so that the user interface 800 displays the chorus message 820 and the current user identifier 810. Further, the friend object may input a reply message through the message input area 830 and send it through the message sending control 840 to converse with the current user.
Referring to fig. 9, fig. 9 is a schematic interface diagram illustrating the case where the target chorus object is a popular cover object or an original singer object, according to an example embodiment of the present application. As shown in fig. 9, the user interface 900 may include: a song information display area 910, a lyrics information display area 920, a singing control 930, a recording sub-mode control 940, a recording sub-mode control 950, a current user 960, a target chorus object 970, and an association prompt 971.
Specifically, the user interface 900 may be output after the third interactive operation for selecting the target chorus object from the chorus objects to be selected is detected. Further, when a user operation triggering the recording sub-mode control 950, labeled "I sing later", is detected, the lyric information in the "I sing later" sub-mode may be displayed in the lyric information display area 920.
In fig. 9, the target chorus object 970 is a popular cover object or an original singer object, and therefore an association prompt 971 for prompting the user to establish an association relationship with it is also shown. The association prompt 971 includes the interactable part B mentioned above, represented in fig. 9 in bold as "+ Follow". By clicking "+ Follow", the user binds their user ID with the popular cover object's ID or the original singer object's ID, thereby following that object.
Further, when a second selection operation acting on the singing control 930 in the "I sing later" sub-mode (i.e., the target sub-mode described above) is detected, the step of recording the user audio may be performed.
Therefore, implementing this optional implementation improves interactivity by reminding the chorus object or by establishing an association relationship with it, which strengthens the connection between users and helps improve user stickiness.
Referring to fig. 10, fig. 10 is a schematic flowchart illustrating a chorus file generation method according to another exemplary embodiment of the present application. As shown in fig. 10, the chorus file generating method includes: step S1000 to step S1024.
Step S1000: when a first interactive operation triggering the start of the audio recording function is detected, displaying a chorus control, an interaction control for representing a segment solo recording mode, and an interaction control for representing a full-segment solo recording mode, and displaying the lyric information corresponding to the target accompaniment.
If a second interactive operation acting on the chorus control is detected and one or more uploaded files corresponding to the target accompaniment exist, step S1002 is executed; if a second interactive operation acting on the chorus control is detected and no uploaded files exist, step S1004 is executed.
Step S1002: determining one or more published objects corresponding to the one or more uploaded files while continuing to display the lyric information corresponding to the target accompaniment; the one or more uploaded files include the audio file, and the one or more published objects include the target chorus object.
If the number of published objects is less than or equal to the preset threshold, step S1006 is executed; if the number of published objects is greater than the preset threshold and at least one associated object is detected among the published objects, step S1008 is executed.
Step S1004: playing the target accompaniment and synchronously recording the user audio while continuing to display the lyric information corresponding to the target accompaniment, synthesizing the user audio and the target accompaniment into a file to be chorus, and uploading the file to be chorus to the server as an uploaded file corresponding to the target accompaniment. The flow then ends.
Step S1006: determining the published objects as the chorus objects to be selected. Then, step S1014, step S1016, or step S1018 is executed.
Step S1008: grouping the published objects to obtain a first-class object group containing the at least one associated object and a second-class object group containing the other published objects; an associated object has an association relationship with the current user, and the other published objects are the published objects other than the at least one associated object.
Step S1010: sorting the first-class object group and the second-class object group by upload time from latest to earliest to obtain a sorting result comprising a first sorting result and a second sorting result; the first sorting result contains the at least one associated object, the second sorting result contains the other published objects, and any associated object in the first sorting result ranks ahead of any other published object in the second sorting result.
Step S1012: selecting a preset number of published objects from the sorting result as the chorus objects to be selected. Then, step S1014, step S1016, or step S1018 is executed.
If a third interactive operation for selecting the target chorus object from the chorus objects to be selected is detected and the third interactive operation acts on the target chorus object, step S1014 is executed; if the third interactive operation is detected and acts on the random selection control, step S1016 is executed; if no third interactive operation is detected, step S1018 is executed.
Step S1014: if the target chorus object is a friend object, outputting a feedback prompt for indicating that a chorus message can be fed back to the friend object; if the target chorus object is a popular cover object or an original singer object, outputting an association prompt for prompting the user to establish an association relationship with it. Then, step S1020 is executed.
Step S1016: randomly selecting one chorus object from the chorus objects to be selected as the target chorus object. Then, step S1014 is executed.
Step S1018: displaying the to-be-chorus file recording control; when a chorus recording operation acting on the to-be-chorus file recording control is detected, playing the target accompaniment and synchronously recording the user audio, and synthesizing the user audio and the target accompaniment to obtain the file to be chorus. The flow then ends.
Step S1020: generating at least one recording sub-mode, determining the target sub-mode from the at least one recording sub-mode according to the first selection operation, and displaying the lyric information in the target sub-mode; the lyric information in the target sub-mode includes highlighted recording parts, and different recording sub-modes correspond to different recording parts.
Step S1022: playing the audio file in the preview mode.
Step S1024: if the confirmation operation for the audio file and the second selection operation for the target sub-mode are detected, recording the user audio and synthesizing the audio file corresponding to the target chorus object with the user audio to obtain the chorus file.
It should be noted that steps S1000 to S1024 correspond to the steps of the embodiment shown in fig. 1; for their specific implementations, please refer to that embodiment, which will not be described again.
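For orientation, the control flow of fig. 10 can be condensed into the following Python sketch; `ui` and `server` are hypothetical facades whose methods stand in for the steps above, so none of these calls are part of the application itself.

```python
def chorus_flow(ui, server, accompaniment):
    ui.show_recording_modes()
    ui.show_lyrics(accompaniment)                            # S1000
    if not ui.chorus_control_triggered():
        return None
    uploads = server.uploaded_files(accompaniment)
    if not uploads:                                          # S1004: no one has sung yet
        server.upload(ui.record_with(accompaniment))
        return None
    candidates = server.select_candidates(uploads)           # S1002, S1006 to S1012
    target = ui.pick_target(candidates)                      # S1014 / S1016 / S1018
    if target is None:                                       # S1018: record a file to be chorus
        server.upload(ui.record_with(accompaniment))
        return None
    ui.show_prompt_for(target)                               # S1014: feedback or association prompt
    ui.pick_sub_mode()                                       # S1020: "I sing first" / "I sing later"
    ui.preview(target.audio_file)                            # S1022
    if not ui.confirmed():
        return None
    return server.synthesize(target.audio_file, ui.record_user_audio())  # S1024
```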
Therefore, implementing the method shown in fig. 10 simplifies user operation and lowers the difficulty of triggering a chorus, enabling convenient and fast chorus triggering; it helps improve interactivity, seizes and makes effective use of moments when users are enthusiastic about creating (singing), and on this basis can strengthen social relationships between users and increase their interaction frequency, thereby improving user stickiness; it can also increase the usage frequency of the chorus function; and it provides the function of generating files to be chorus, giving users a richer choice of chorus modes.
Moreover, although the steps of the methods herein are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Exemplary System
The application also discloses a chorus file generating system, which may include a user selection module, a lyric module, a mode selection module, a button module, a chorus user information display module, and a sequential singing selection module.
The user selection module is used for providing a chorus object selection function for the user, and at least includes the to-be-selected chorus object display area, the random selection control, and the to-be-chorus file recording control.
The lyric module is used for displaying the lyric information corresponding to the target accompaniment and at least includes the lyric information display area. In addition, the lyric module is used for, when a sliding operation is detected, controlling the lyrics to slide at the speed corresponding to the sliding operation. The lyric module is further used for displaying the lyric information in the target recording sub-mode.
The mode selection module is used for providing a plurality of singing modes for the user, and at least includes the interaction control for representing a segment solo recording mode, the interaction control for representing a full-segment solo recording mode, and the chorus control.
The button module is used to provide confirmation functionality, and may include at least a confirmation control.
The chorus user information display module is used for displaying the feedback prompt, the current user's avatar identifier, and the target chorus object's avatar identifier.
The sequential singing selection module is used for providing the function of selecting a recording sub-mode, and specifically includes the recording sub-mode control labeled "I sing first" and the recording sub-mode control labeled "I sing later". The button module is further configured to detect the second selection operation acting on the singing control.
Implementing this system simplifies user operation and lowers the difficulty of triggering a chorus, enabling convenient and fast chorus triggering; it helps improve interactivity, seizes and makes effective use of moments when users are enthusiastic about creating (singing), and on this basis can strengthen social relationships between users and increase their interaction frequency, thereby improving user stickiness; it can also increase the usage frequency of the chorus function; and it provides the function of generating files to be chorus, giving users a richer choice of chorus modes.
Exemplary Medium
Having described the method of the exemplary embodiments of the present application, the media of the exemplary embodiments of the present application will be described next.
In some possible embodiments, various aspects of the present application may also be implemented as a medium having program code stored thereon, which when executed by a processor of a device, is used to implement the steps in the chorus file generation method according to various exemplary embodiments of the present application described in the above-mentioned "exemplary methods" section of this specification.
Specifically, when executing the program code, the processor of the device is configured to implement the following steps: when a first interactive operation triggering the start of the audio recording function is detected, displaying a plurality of interactive controls respectively representing different recording modes, the plurality of interactive controls including a chorus control; if a second interactive operation acting on the chorus control is detected, determining the chorus objects to be selected corresponding to the second interactive operation and displaying them; and if a third interactive operation for selecting a target chorus object from the chorus objects to be selected is detected, recording the user audio and synthesizing the audio file corresponding to the target chorus object with the user audio to obtain a chorus file.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: when the first interactive operation is detected, displaying the lyric information corresponding to the target accompaniment; and when the second interactive operation or the third interactive operation is detected, continuing to display the lyric information corresponding to the target accompaniment.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: if one or more uploaded files corresponding to the target accompaniment exist, determining one or more published objects corresponding to the one or more uploaded files, where the one or more uploaded files include the audio file and the one or more published objects include the target chorus object, and determining the chorus objects to be selected from the one or more published objects; and if it is detected that no uploaded files exist, playing the target accompaniment and synchronously recording the user audio, synthesizing the user audio and the target accompaniment into a file to be chorus, and uploading the file to be chorus to the server as an uploaded file corresponding to the target accompaniment.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: displaying the to-be-chorus file recording control; and when a chorus recording operation acting on the to-be-chorus file recording control is detected and no third interactive operation is detected, playing the target accompaniment and synchronously recording the user audio, and synthesizing the user audio and the target accompaniment to obtain the file to be chorus.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: if the number of published objects is less than or equal to a preset threshold, determining the published objects as the chorus objects to be selected; and if the number of published objects is greater than the preset threshold, sorting the published objects by upload time from latest to earliest and selecting a preset number of published objects from the sorting result as the chorus objects to be selected.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: grouping the published objects to obtain a first-class object group containing at least one associated object and a second-class object group containing the other published objects, the other published objects being the published objects other than the at least one associated object; and sorting the first-class object group and the second-class object group by upload time from latest to earliest to obtain a sorting result comprising a first sorting result and a second sorting result, where the first sorting result contains the at least one associated object, the second sorting result contains the other published objects, and any associated object in the first sorting result ranks ahead of any other published object in the second sorting result.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: if the target chorus object is a friend object, outputting a feedback prompt for indicating that a chorus message can be fed back to the friend object; and if the target chorus object is a popular cover object or an original singer object, outputting an association prompt for prompting the user to establish an association relationship with it.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: determining the object type of each chorus object to be selected and generating the corresponding description information according to the object type, the object types including friend objects, popular cover objects, and original singer objects; and displaying the chorus objects to be selected together with the description information.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: playing the audio file in a preview mode; and if a confirmation operation for the audio file is detected, proceeding to record the user audio.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: generating at least one recording sub-mode and determining a target sub-mode from the at least one recording sub-mode according to a first selection operation; displaying the lyric information in the target sub-mode, where the lyric information in the target sub-mode includes highlighted recording parts and different recording sub-modes correspond to different recording parts; and if a second selection operation for the target sub-mode is detected, proceeding to record the user audio.
In some embodiments of the present application, the program code is further configured to, when executed by the processor of the device, perform the following steps: and randomly selecting a chorus object from chorus objects to be selected as a target chorus object.
It should be noted that: the above-mentioned medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device over any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
Exemplary devices
Having described the media of the exemplary embodiment of the present application, next, a chorus file generating apparatus of the exemplary embodiment of the present application will be described with reference to fig. 11.
Referring to fig. 11, fig. 11 is a block diagram illustrating the structure of a chorus file generating apparatus according to an exemplary embodiment of the present application. As shown in fig. 11, the chorus file generating apparatus 1100 according to an exemplary embodiment of the present application includes: a recording mode display unit 1101, a to-be-selected chorus object display unit 1102, an audio recording unit 1103, and a chorus file synthesizing unit 1104, wherein:
a recording mode display unit 1101, configured to display, when a first interactive operation triggering the start of the audio recording function is detected, a plurality of interactive controls respectively representing different recording modes, the plurality of interactive controls including a chorus control;
the to-be-selected chorus object display unit 1102 is configured to determine a to-be-selected chorus object corresponding to a second interactive operation and display the to-be-selected chorus object when the second interactive operation acting on the chorus control is detected;
an audio recording unit 1103, configured to record a user audio when detecting a third interactive operation for selecting a target chorus object from the chorus objects to be selected;
and a chorus file synthesizing unit 1104, configured to synthesize the audio file corresponding to the target chorus object with the user audio to obtain a chorus file.
In addition, the apparatus further includes an information display unit (not shown) for displaying the lyric information corresponding to the target accompaniment when the first interactive operation is detected, and for continuing to display the lyric information when the second interactive operation or the third interactive operation is detected.
Therefore, implementing the chorus file generating apparatus 1100 shown in fig. 11, on the one hand, simplifies user operation and lowers the difficulty of triggering a chorus, enabling convenient and fast chorus triggering; on the other hand, it helps improve interactivity, seizes and makes effective use of moments when users are enthusiastic about creating (singing), and on this basis can strengthen social relationships between users and increase their interaction frequency, thereby improving user stickiness.
As an optional implementation, the plurality of interactive controls further includes: an interactive control for representing a segment solo recording mode and an interactive control for representing a full segment solo recording mode.
Therefore, implementing this optional implementation provides the user with multiple singing modes, which helps improve interactivity. In addition, the chorus mode is integrated into the commonly provided solo recording modes, so that a user's chorus need can be met conveniently and promptly, which helps improve user stickiness. Compared with the prior art, the user does not need multiple clicks (such as clicking "friend homepage", "friend's singing work", and "chorus with the friend's singing work") to enter the chorus mode, which increases the usage frequency of the chorus function.
As an optional implementation, the apparatus 1100 further includes:
an object determining unit, configured to determine, after the to-be-selected chorus object display unit 1102 detects the second interactive operation acting on the chorus control, one or more published objects corresponding to one or more uploaded files if it is detected that one or more uploaded files corresponding to the target accompaniment exist; the one or more uploaded files include the audio file, and the one or more published objects include the target chorus object;
and the to-be-selected chorus object display unit 1102 determines the chorus objects to be selected corresponding to the second interactive operation by determining them from the one or more published objects;
the audio recording unit 1103 is further configured to play the target accompaniment and synchronously record the user audio when the object determining unit detects that no uploaded files exist, and to synthesize the user audio and the target accompaniment into a file to be chorus and upload it to the server as an uploaded file corresponding to the target accompaniment.
Therefore, implementing this optional implementation provides two chorus modes: on the one hand, a user can record a chorus based on files to be chorus (namely the uploaded files) uploaded by other users; on the other hand, the user can also record a file to be chorus and upload it to the server as an uploaded file to provide to other users, which improves interaction diversity, the user experience, and user stickiness.
As an optional implementation, the apparatus 1100 further includes:
a control display unit (not shown) for displaying the to-be-chorus file recording control;
the audio recording unit 1103 is further configured to play the target accompaniment and record the user audio synchronously when detecting the chorus recording operation acting on the to-be-chorus file recording control and detecting no third interactive operation, and synthesize the user audio and the target accompaniment to obtain the to-be-chorus file.
Therefore, this optional implementation provides the function of generating files to be chorus, giving users a richer choice of chorus modes and helping improve the user experience.
As an optional implementation, the to-be-selected chorus object display unit 1102 determines the chorus objects to be selected from the one or more published objects as follows:
if the number of published objects is less than or equal to the preset threshold, the to-be-selected chorus object display unit 1102 determines the published objects as the chorus objects to be selected;
if the number of published objects is greater than the preset threshold, the to-be-selected chorus object display unit 1102 sorts the published objects by upload time from latest to earliest and selects a preset number of published objects from the sorting result as the chorus objects to be selected.
Therefore, implementing this optional implementation presents the chorus objects to be selected to the current user according to upload time, so that the current user can interact with them, which helps improve user stickiness.
As an optional implementation, if it is detected that at least one associated object exists among the published objects, the to-be-selected chorus object display unit 1102 sorts the published objects by upload time from latest to earliest as follows:
the to-be-selected chorus object display unit 1102 groups the published objects to obtain a first-class object group containing the at least one associated object and a second-class object group containing the other published objects, the other published objects being the published objects other than the at least one associated object;
the to-be-selected chorus object display unit 1102 sorts the first-class object group and the second-class object group by upload time from latest to earliest to obtain a sorting result comprising a first sorting result and a second sorting result; the first sorting result contains the at least one associated object, the second sorting result contains the other published objects, and any associated object in the first sorting result ranks ahead of any other published object in the second sorting result.
The associated objects include friend objects, and the other published objects include popular cover objects and/or original singer objects; a popular cover object ranks lower than an original singer object.
Therefore, implementing this optional implementation sorts different types of published objects by priority, so that the top-ranked preset number of published objects are selected from the sorting result as the chorus objects to be selected, which helps promote the user's willingness to chorus and to interact.
As an optional implementation, the apparatus 1100 further includes:
a prompt output unit (not shown), configured to, after the audio recording unit 1103 detects the third interactive operation for selecting the target chorus object from the chorus objects to be selected, output a feedback prompt for indicating that a chorus message can be fed back to the friend object if the target chorus object is a friend object;
and the prompt output unit is further configured to output, when the target chorus object is a popular cover object or an original singer object, an association prompt for prompting the user to establish an association relationship with it.
Therefore, implementing this optional implementation improves interactivity by reminding the chorus object or by establishing an association relationship with it, which strengthens the connection between users and helps improve user stickiness.
As an optional implementation, the to-be-selected chorus object display unit 1102 displays the chorus objects to be selected as follows:
the to-be-selected chorus object display unit 1102 determines the object type of each chorus object to be selected and generates the corresponding description information according to the object type, the object types including friend objects, popular cover objects, and original singer objects;
and displays the chorus objects to be selected together with the description information.
Therefore, implementing this optional implementation generates the corresponding description information in a personalized manner according to the object type, which helps promote the user's willingness to chorus, thereby serving the chorus purpose within the singing function.
As an optional implementation, the apparatus 1100 further includes:
a preview mode playing unit (not shown) for playing the audio file in preview mode before the audio recording unit 1103 records the user audio;
the audio recording unit 1103 is specifically configured to record the user audio when a confirmation operation for the audio file is detected.
Therefore, this optional implementation provides a preview function: the user can preview one or more audio files until a preferred audio file is found, which helps improve the user experience.
As an optional implementation, the apparatus 1100 further includes:
a recording mode generating unit (not shown) configured to generate at least one recording sub-mode before the audio recording unit 1103 records the user audio, and determine a target sub-mode from the at least one recording sub-mode according to a first selection operation;
a lyric information display unit (not shown) for displaying the lyric information in the target sub-mode, where the lyric information in the target sub-mode includes highlighted recording parts and different recording sub-modes correspond to different recording parts;
the audio recording unit 1103 is specifically configured to record the user audio when the confirmation operation for the audio file is detected and when the second selection operation for the target sub-mode is detected.
Therefore, implementing this optional implementation provides the user with multiple selectable sub-modes, so that the user can choose a preferred sub-mode to sing in as needed and obtain a chorus file that meets their needs, which helps improve the user experience and user stickiness.
As an optional implementation, the third interactive operation acts either on the target chorus object or on the random selection control; if it acts on the random selection control, the apparatus 1100 further includes:
a random selection unit (not shown) for randomly selecting one chorus object as a target chorus object from the chorus objects to be selected before the audio recording unit 1103 records the user audio.
Therefore, this optional implementation provides the user with a function of randomly selecting a chorus object, which helps users who find it hard to decide.
It should be noted that although several modules or units of the chorus file generating apparatus are mentioned in the above detailed description, this division is not mandatory. Indeed, according to the embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided among a plurality of modules or units.
Exemplary electronic device
Having described the method, medium, and apparatus of the exemplary embodiments of the present application, an electronic device according to another exemplary embodiment of the present application is next described.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, a method, or a program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," a "module," or a "system."
A chorus file generating apparatus 1200 according to yet another alternative example embodiment of the present application is described below with reference to fig. 12. The chorus file generating apparatus 1200 shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 12, the chorus file generating apparatus 1200 is represented in the form of an electronic device. The components of the chorus file generating apparatus 1200 may include, but are not limited to: at least one processing unit 1210, at least one storage unit 1220, and a bus 1230 connecting the various system components (including the storage unit 1220 and the processing unit 1210).
The storage unit stores program code executable by the processing unit 1210, such that the processing unit 1210 performs the steps of the various exemplary embodiments of the present application described in the exemplary methods section of this specification. For example, the processing unit 1210 may perform the steps shown in fig. 1 and fig. 10.
The storage unit 1220 may include a readable medium in the form of a volatile storage unit, such as a random access memory unit (RAM) 12201 and/or a cache memory unit 12202, and may further include a read-only memory unit (ROM) 12203.
Storage unit 1220 may also include a program/utility 12204 having a set (at least one) of program modules 12205, such program modules 12205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The bus 1230 may be one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The chorus file generating apparatus 1200 may also communicate with one or more external devices 1300 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the chorus file generating apparatus 1200, and/or with any device (e.g., a router, a modem, etc.) that enables the chorus file generating apparatus 1200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 1250. Moreover, the chorus file generating apparatus 1200 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the internet) through the network adapter 1260. As shown in fig. 12, the network adapter 1260 communicates with the other modules of the chorus file generating apparatus 1200 through the bus 1230. It should be appreciated that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the chorus file generating apparatus 1200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the embodiments of the present application.
While the spirit and principles of the application have been described with reference to several particular embodiments, it is to be understood that the application is not limited to the specific embodiments disclosed, and that the division into aspects does not mean that features in those aspects cannot be combined to advantage; the division is for convenience of expression only. The application is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A chorus file generating method is characterized by comprising the following steps:
when a first interactive operation triggering start of an audio recording function is detected, displaying a plurality of interactive controls respectively representing different recording modes, wherein the plurality of interactive controls include a chorus control;
if a second interactive operation acting on the chorus control is detected, determining a chorus object to be selected corresponding to the second interactive operation and displaying the chorus object to be selected;
and if detecting a third interactive operation for selecting a target chorus object from the chorus objects to be selected, recording user audio and synthesizing an audio file corresponding to the target chorus object with the user audio to obtain a chorus file.
2. The method of claim 1, wherein the plurality of interactive controls further comprises: an interactive control for representing a segment solo recording mode and an interactive control for representing a full segment solo recording mode.
3. The method of claim 1, further comprising:
when the first interactive operation is detected, displaying lyric information corresponding to a target accompaniment;
and keeping displaying the lyric information corresponding to the target accompaniment when the second interactive operation or the third interactive operation is detected.
4. The method of claim 3, wherein after detecting the second interactive operation acting on the chorus control, the method further comprises:
if one or more uploaded files corresponding to the target accompaniment exist, determining one or more release objects corresponding to the one or more uploaded files; wherein the one or more uploaded files include the audio file, and the one or more published objects include the target chorus object;
wherein determining the chorus object to be selected corresponding to the second interactive operation comprises: determining the chorus object to be selected from the one or more published objects;
and if it is detected that no uploaded file corresponding to the target accompaniment exists, playing the target accompaniment and synchronously recording the user audio, synthesizing the user audio and the target accompaniment into a file to be chorused, and uploading the file to be chorused to a server as an uploaded file corresponding to the target accompaniment.
5. The method of claim 4, further comprising:
displaying a recording control for the file to be chorused;
and when a chorus recording operation acting on the recording control for the file to be chorused is detected and the third interactive operation is not detected, playing the target accompaniment, synchronously recording the user audio, and synthesizing the user audio and the target accompaniment to obtain the file to be chorused.
6. The method of claim 4, wherein determining the chorus object to be selected from the one or more published objects comprises:
if the number of the published objects is less than or equal to a preset threshold, determining the published objects as the chorus objects to be selected;
and if the number of the published objects is greater than the preset threshold, sorting the published objects in order of upload time from late to early, and selecting a preset number of published objects from the sorting result as the chorus objects to be selected.
7. The method of claim 6, wherein, if it is detected that at least one associated object exists among the published objects, sorting the published objects in order of upload time from late to early comprises:
grouping the published objects to obtain a first-class object group containing the at least one associated object and a second-class object group containing the other published objects, wherein an associated object has an association relationship with the current user, and the other published objects are the published objects other than the at least one associated object;
and sorting the first-class object group and the second-class object group in order of upload time from late to early to obtain a sorting result comprising a first sorting result and a second sorting result, wherein the first sorting result includes the at least one associated object, the second sorting result includes the other published objects, and any associated object in the first sorting result is ranked ahead of any other published object in the second sorting result.
8. A chorus file generating apparatus, comprising:
a recording mode display unit, configured to display, when a first interactive operation triggering start of an audio recording function is detected, a plurality of interactive controls respectively representing different recording modes, wherein the plurality of interactive controls include a chorus control;
a to-be-selected chorus object display unit, configured to, when a second interactive operation acting on the chorus control is detected, determine a chorus object to be selected corresponding to the second interactive operation and display the chorus object to be selected;
an audio recording unit, configured to record user audio when a third interactive operation for selecting a target chorus object from the chorus objects to be selected is detected;
and a chorus file synthesis unit, configured to synthesize an audio file corresponding to the target chorus object with the user audio to obtain a chorus file.
9. An electronic device, comprising:
a processor; and
a memory having computer readable instructions stored thereon which, when executed by the processor, implement a chorus file generation method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements a chorus file generation method as claimed in any one of claims 1 to 7.
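For readers tracing the candidate-selection logic of claims 6 and 7, the following Python sketch shows one possible realization. The threshold and candidate-count values, the PublishedObject fields, and the function names are hypothetical illustrations, not values fixed by the claims:

```python
from dataclasses import dataclass

PRESET_THRESHOLD = 20  # hypothetical preset threshold (claim 6)
PRESET_NUMBER = 10     # hypothetical preset number of candidates (claim 6)

@dataclass
class PublishedObject:
    user_id: str
    upload_time: float           # e.g. a Unix timestamp
    is_associated: bool = False  # has an association relationship with the current user

def select_candidates(published):
    """Claim 6: at or below the threshold, every published object is a
    candidate; above it, sort by upload time from late to early and keep
    a preset number. Claim 7: any associated objects are grouped and
    ranked ahead of all other published objects (sketch only)."""
    if len(published) <= PRESET_THRESHOLD:
        return list(published)
    newest_first = lambda p: -p.upload_time
    first_group = sorted((p for p in published if p.is_associated), key=newest_first)
    second_group = sorted((p for p in published if not p.is_associated), key=newest_first)
    return (first_group + second_group)[:PRESET_NUMBER]
```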
CN202011053091.7A 2020-09-29 2020-09-29 Chorus file generation method, apparatus, device and computer readable storage medium Active CN112130727B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011053091.7A CN112130727B (en) 2020-09-29 2020-09-29 Chorus file generation method, apparatus, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011053091.7A CN112130727B (en) 2020-09-29 2020-09-29 Chorus file generation method, apparatus, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112130727A true CN112130727A (en) 2020-12-25
CN112130727B CN112130727B (en) 2022-02-01

Family

ID=73844863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011053091.7A Active CN112130727B (en) 2020-09-29 2020-09-29 Chorus file generation method, apparatus, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112130727B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113031903A (en) * 2021-03-23 2021-06-25 青岛海信移动通信技术股份有限公司 Electronic equipment and audio stream synthesis method thereof

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1433548A (en) * 1999-12-20 2003-07-30 汉索尔索弗特有限公司 Network based music playing/song accompanying service system and method
CN101105936A (en) * 2006-07-10 2008-01-16 安琦国际贸易有限公司 Method for searching and display music score by using search device
TW200923675A (en) * 2007-11-19 2009-06-01 Inventec Besta Co Ltd Network chorusing system and method thereof
CN103377265A (en) * 2012-04-16 2013-10-30 爱卡拉互动媒体股份有限公司 Characteristic song requesting system and operation method
CN108630240A (en) * 2017-03-23 2018-10-09 北京小唱科技有限公司 A kind of chorus method and device
CN109300459A (en) * 2018-09-07 2019-02-01 传线网络科技(上海)有限公司 Song chorus method and device
CN110418182A (en) * 2019-07-19 2019-11-05 福建星网视易信息***有限公司 Chorus method of networking and computer readable storage medium
CN111524494A (en) * 2020-04-27 2020-08-11 腾讯音乐娱乐科技(深圳)有限公司 Remote real-time chorus method and device and storage medium
CN111583972A (en) * 2020-05-28 2020-08-25 北京达佳互联信息技术有限公司 Singing work generation method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
答疑组鸢尾: "How to initiate a chorus with friends in WeSing (全民K歌)?", Baidu Zhidao (百度知道): HTTPS://ZHIDAO.BAIDU.COM/QUESTION/622984335695900492.HTML *

Also Published As

Publication number Publication date
CN112130727B (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN107832434A (en) Method and apparatus based on interactive voice generation multimedia play list
CN108886523A (en) Interactive online music experience
US20100223314A1 (en) Apparatus and method for creating and transmitting unique dynamically personalized multimedia messages
CN109889880B (en) Information display method, device, equipment and storage medium for concerned user
CN111383669B (en) Multimedia file uploading method, device, equipment and computer readable storage medium
US20180293088A1 (en) Interactive comment interaction method and apparatus
US20030177113A1 (en) Information searching system
US9002885B2 (en) Media playback in a virtual environment
CN110109607B (en) Information processing method and device, electronic equipment and storage medium
CN113590870A (en) Recommendation method, recommendation device, storage medium and electronic equipment
WO2023134419A1 (en) Information interaction method and apparatus, and device and storage medium
CN112130727B (en) Chorus file generation method, apparatus, device and computer readable storage medium
JP5572581B2 (en) Singing information processing apparatus and singing information processing program
CN114143572A (en) Live broadcast interaction method and device, storage medium and electronic equipment
CN109885720B (en) Music on demand interaction method, medium, device and computing equipment
US20140122606A1 (en) Information processing device, information processing method, and program
CN112667333A (en) Singing list interface display control method and device, storage medium and electronic equipment
JP2013003685A (en) Information processing device, information processing method and program
CN110262716A (en) A kind of data manipulation method, device and computer readable storage medium
CN115599273A (en) Media content processing method, device, equipment, readable storage medium and product
CN115346503A (en) Song creation method, song creation apparatus, storage medium, and electronic device
CN115328364A (en) Information sharing method and device, storage medium and electronic equipment
CN110209870B (en) Music log generation method, device, medium and computing equipment
CN107678810A (en) A kind of multimedia file processing method, device and storage medium
CN113726641A (en) Online interaction method and device, storage medium and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant