TWI486949B - Music emotion classification method - Google Patents

Music emotion classification method

Info

Publication number
TWI486949B
TWI486949B TW101148694A
Authority
TW
Taiwan
Prior art keywords
music
emotional
classification
parameters
new
Prior art date
Application number
TW101148694A
Other languages
Chinese (zh)
Other versions
TW201426730A (en)
Original Assignee
Univ Southern Taiwan Sci & Tec
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Southern Taiwan Sci & Tec filed Critical Univ Southern Taiwan Sci & Tec
Priority to TW101148694A priority Critical patent/TWI486949B/en
Publication of TW201426730A publication Critical patent/TW201426730A/en
Application granted granted Critical
Publication of TWI486949B publication Critical patent/TWI486949B/en

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Description

Music emotion classification method

The present invention relates to a music classification method, and more particularly to a method for classifying music by emotion.

In an era of information explosion, music is easy to obtain. With the growing popularity of video games, many games later release original soundtracks for the music they use, or even hold concerts. To match the content of the game scenes, the music usually tells a story; if it can be classified by emotion so that players may pick tracks that suit their mood, the game will be better received by its players.

Although software systems and devices that claim to classify music by emotion have already been developed, they classify according to fixed, pre-set rules. Because everyone perceives music differently, their results are often rejected by users; such a fixed classification scheme cannot suit everyone and has poor applicability. In addition, its analysis accuracy for music containing vocals is also quite poor.

Accordingly, an object of the present invention is to provide an emotion classification method that classifies music by tonality, interval, rhythm, and timbre.

Another object of the present invention is to provide an emotion classification method that lets users classify the emotion of music according to their own perception.

Thus, the music emotion classification method of the present invention, suitable for implementation in an electronic device in software and/or hardware to classify music by emotion, comprises the following steps: (A) setting a corresponding emotion-adjective parameter for each of a plurality of music samples stored in the electronic device; (B) analyzing musical features such as tonality, rhythm, interval, and timbre for each music sample of step (A), and outputting a plurality of musical feature parameters for each sample; (C) statistically analyzing, with a multi-class support vector machine, the relationship between the emotion-adjective parameters of the music samples and their corresponding musical feature parameters, thereby constructing a multi-class emotion prediction model; and (D) performing the same musical feature analysis on each new piece of music subsequently stored in the electronic device to output its musical feature parameters, analyzing those parameters with the multi-class emotion prediction model to classify the new music by emotion, and outputting an emotion-adjective parameter corresponding to the new music.

Effect of the invention: by statistically analyzing, with a support vector machine, the musical feature parameters of tonality, interval, rhythm, and timbre together with the emotion-adjective parameters assigned to the music, a multi-class emotion prediction model can be constructed that classifies music by emotion with high accuracy.

The above and other technical contents, features, and effects of the present invention will be clearly presented in the following detailed description of a preferred embodiment with reference to the drawings.

As shown in Fig. 1, a preferred embodiment of the music emotion classification method of the present invention is suitable for implementation in an electronic device in software and/or hardware, and classifies the music stored in the device by emotion through the analysis of tonality, interval, rhythm, and timbre. The music may be video game music, songs, or instrumental music. The emotion classification method comprises the following steps:

Step 1: set emotion-adjective parameters for the music samples. For each of a plurality of music samples whose emotion classification has already been determined, an emotion-adjective parameter representing its emotional category is set.

In this embodiment, the music samples are divided into five classes according to their emotion, and each emotion class corresponds to one emotion adjective and one emotion-adjective parameter. The adjectives are "calm", "cold", "strange", "humorous", and "passionate". In practice, however, the emotion adjectives used are not limited to these.
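The assignment of Step 1 can be sketched as a simple lookup table. The integer codes and the function name below are illustrative assumptions; the patent does not specify how the emotion-adjective parameters are encoded.

```python
# Illustrative encoding of the five emotion classes as integer
# "emotion-adjective parameters" (the codes 0..4 are assumptions
# for this sketch; the patent does not give a concrete encoding).
EMOTION_PARAMS = {
    "calm": 0,
    "cold": 1,
    "strange": 2,
    "humorous": 3,
    "passionate": 4,
}

def label_sample(adjective: str) -> int:
    """Return the emotion-adjective parameter for a music sample."""
    return EMOTION_PARAMS[adjective]

print(label_sample("humorous"))  # -> 3
```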

Step 2: perform musical feature analysis on all music samples to obtain their musical feature parameters. Each music sample is analyzed for four musical features, namely tonality, interval, rhythm, and timbre, and the corresponding musical feature parameters are output for each sample.

The tonality feature analysis identifies the key types contained in each music sample and the proportion (%) of each key, for example C major, D major, E major, and B major, as well as C minor, D minor, and B minor, and statistically derives, for each emotion-adjective parameter, the average proportion (%) of each key across all corresponding music samples. The interval feature analysis determines the proportions of perfect consonant intervals, imperfect consonant intervals, and dissonant intervals used in each music sample, and statistically derives, for each emotion-adjective parameter, the average proportions (%) of these three interval classes across all corresponding samples. The rhythm feature analysis measures the tempo (beats per minute, bpm) of each music sample and statistically derives the average tempo of all samples corresponding to each emotion-adjective parameter. The timbre feature analysis examines the sound shape of each music sample, that is, its spectral distribution.
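The interval statistic described above, the share of perfect consonances, imperfect consonances, and dissonances in a piece, can be sketched as follows. The patent performed its feature analysis in MATLAB; this Python sketch uses the standard music-theory classification of intervals by semitone count, which is our assumption, not a detail given in the text.

```python
from collections import Counter

# Standard classification of intervals (in semitones, mod 12):
# perfect consonances: unison, perfect fourth, perfect fifth, octave
# imperfect consonances: major/minor thirds and sixths
# everything else (seconds, sevenths, tritone) counts as dissonant
PERFECT = {0, 5, 7}
IMPERFECT = {3, 4, 8, 9}

def interval_proportions(semitone_intervals):
    """Return the percentage of perfect, imperfect, and dissonant intervals."""
    counts = Counter()
    for iv in semitone_intervals:
        iv %= 12
        if iv in PERFECT:
            counts["perfect"] += 1
        elif iv in IMPERFECT:
            counts["imperfect"] += 1
        else:
            counts["dissonant"] += 1
    n = len(semitone_intervals)
    return {k: 100.0 * counts[k] / n
            for k in ("perfect", "imperfect", "dissonant")}

# A fifth, two thirds, and a major second:
print(interval_proportions([7, 4, 4, 2]))
# -> {'perfect': 25.0, 'imperfect': 50.0, 'dissonant': 25.0}
```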

After the musical features of each sample have been extracted, they are encoded and normalized to produce the required musical feature parameters.
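The encoding and normalization step is left open in the text. One common choice, shown here purely as a sketch and not as the patent's actual procedure, is min-max scaling of every feature column to [0, 1]:

```python
def min_max_normalize(rows):
    """Scale each feature column of a list of feature vectors to [0, 1].

    Columns with zero range are mapped to 0.0. This is one common
    normalization; the patent does not specify which one was used.
    """
    cols = list(zip(*rows))
    los = [min(c) for c in cols]
    his = [max(c) for c in cols]
    return [
        [(v - lo) / (hi - lo) if hi > lo else 0.0
         for v, lo, hi in zip(row, los, his)]
        for row in rows
    ]

# Toy feature vectors: (bpm, perfect-consonance %) per sample
print(min_max_normalize([[120.0, 30.0], [90.0, 50.0], [150.0, 40.0]]))
# -> [[0.5, 0.0], [0.0, 1.0], [1.0, 0.5]]
```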

In this embodiment, the tonality, interval, rhythm, and timbre features of the music samples are each analyzed with MATLAB software; in practice, however, many analysis methods exist for these features, so the method is not limited to the software used here. In addition, since data encoding and normalization are common techniques in statistical analysis with many possible implementations, they are not described in detail.

Step 3: build a multi-class emotion prediction model with a support vector machine (SVM). The SVM builds the multi-class emotion prediction model from the emotion-adjective parameter set for each music sample in Step 1 and the musical feature parameters obtained for the samples in Step 2.

In this embodiment, the musical feature parameters of each sample and its corresponding emotion-adjective parameter are statistically analyzed to form a multi-class classification problem, which is then decomposed into a series of one-against-one (OAO) support vector machine models. Cross-validation is performed with a Gaussian kernel, and a multi-class emotion prediction model is built from the decisions of all the SVM models. This model can then be used to classify by emotion any other music stored in the electronic device.
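A minimal sketch of this construction with scikit-learn (the patent used MATLAB and gives no hyper-parameters; the random stand-in data, the parameter grid, and the feature count of four are all assumptions). Multi-class `SVC` is trained one-against-one internally, matching the OAO decomposition described above, and `GridSearchCV` supplies the cross-validation over the Gaussian (RBF) kernel's parameters:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy stand-in data: 4 musical feature parameters per sample
# (e.g. major-key %, perfect-consonance %, bpm, a spectral statistic),
# already normalized, with emotion-adjective parameters 0..4.
rng = np.random.default_rng(0)
X = rng.random((100, 4))
y = rng.integers(0, 5, size=100)

# scikit-learn trains multi-class SVC one-against-one internally;
# the RBF kernel and cross-validated hyper-parameter search mirror
# the Gaussian-kernel construction described above.
search = GridSearchCV(
    SVC(kernel="rbf", decision_function_shape="ovo"),
    param_grid={"C": [1, 10], "gamma": ["scale", 0.5]},
    cv=3,
)
search.fit(X, y)
model = search.best_estimator_

# The fitted model predicts an emotion-adjective parameter for any
# new, equally normalized feature vector.
print(model.predict(rng.random((1, 4))))
```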

Step 4: classify other music stored in the electronic device with the multi-class emotion prediction model. Once the model has been built from the music samples, it can carry out the emotion classification of music. For each new piece of music awaiting classification, musical feature analysis is first performed to obtain its tonality, interval, rhythm, and timbre feature parameters; the multi-class emotion prediction model then analyzes these parameters to find the corresponding emotion-adjective parameter, and thus the emotion class to which each new piece belongs. When the analysis is complete, the electronic device is driven to display an animated image corresponding to the emotion-adjective parameter of the analyzed music, reminding the user.

Step 5: store the analyzed new music classified according to its emotion-adjective parameter, building a music emotion classification database.
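The database of Step 5 can be sketched as a mapping from emotion-adjective parameter to the tracks filed under it. The track names and the `store` helper here are hypothetical:

```python
from collections import defaultdict

# Minimal sketch of the Step 5 classification database: tracks are
# filed under the emotion label the model assigned to them.
music_db = defaultdict(list)

def store(track_name, emotion_param):
    music_db[emotion_param].append(track_name)

store("boss_theme.wav", "passionate")
store("menu_loop.wav", "calm")
store("credits.wav", "calm")

print(sorted(music_db["calm"]))  # -> ['credits.wav', 'menu_loop.wav']
```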

Through the above steps, the present invention analyzes the tonality, interval, rhythm, and timbre feature parameters of the music samples and performs SVM statistical analysis with the emotion-adjective parameters, constructing a multi-class emotion prediction model whose emotion classification is highly accurate.

It should be noted that the preliminary emotion classification of the music samples may be made by the user of a device implementing this method according to his or her own listening impressions, so the multi-class emotion prediction model built in Step 3 classifies music in a way that matches personal preference. The method therefore lets such a user conveniently build a dedicated multi-class emotion prediction model based on his or her own perception, so that the classification results better fit individual needs.

Of course, in practice the emotion classification of the music samples may also be derived from questionnaire experiments; in that case, the classification results of the multi-class emotion prediction model built by this method will be closer to the general public's perception, making it suitable for use by game software developers.

Therefore, the music emotion classification method of the present invention can conveniently build dedicated multi-class emotion prediction models for various needs according to differences in the user's initial emotion classification of music, overcoming the drawback of conventional music emotion classification systems, which can only classify according to a fixed, pre-set analysis mode. Moreover, by including timbre analysis, the vocal part of music can be analyzed in more detail, helping to improve the accuracy of emotion classification for music containing vocals.

In summary, by statistically analyzing, with support vector machines, the tonality, interval, rhythm, and timbre feature parameters of music together with the emotion-adjective parameters assigned to that music, a multi-class emotion prediction model that accurately classifies music by emotion can be constructed, and timbre analysis improves the accuracy of emotion analysis for music with vocal content. This helps video game players and developers quickly obtain the emotion classification of music, and music developers can design game products that meet players' needs for specific emotional content or situations.

In addition, the invention also lets users build a dedicated multi-class emotion prediction model from emotion classifications of music samples made according to their own listening impressions, so that future emotion classification results are closer to their own taste, again overcoming the drawback of conventional systems that can only classify according to a fixed analysis mode; it is quite convenient and practical. The objects of the present invention are therefore indeed achieved.

The foregoing is merely a preferred embodiment of the present invention and does not limit the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the description of the invention remain within the scope covered by this patent.

BRIEF DESCRIPTION OF THE DRAWINGS: Fig. 1 is a flow chart of the steps of a preferred embodiment of the music emotion classification method of the present invention.

Claims (3)

1. A music emotion classification method, suitable for implementation in an electronic device in software and/or hardware to classify music by emotion, comprising the following steps: (A) setting a corresponding emotion-adjective parameter for each of a plurality of music samples stored in the electronic device; (B) analyzing musical features such as tonality, rhythm, interval, and timbre for each music sample of step (A), and outputting a plurality of musical feature parameters for each sample; (C) statistically analyzing, with a multi-class support vector machine, the relationship between the emotion-adjective parameters of the music samples and their corresponding musical feature parameters, thereby constructing a multi-class emotion prediction model; and (D) performing musical feature analysis on each new piece of music subsequently stored in the electronic device to output a plurality of musical feature parameters, analyzing those parameters with the multi-class emotion prediction model to classify the new music by emotion, and outputting an emotion-adjective parameter corresponding to the new music.
2. The music emotion classification method of claim 1, further comprising a step (E): storing the new music classified according to the emotion-adjective parameter found for it in step (D), building a music emotion classification database.
3. The music emotion classification method of claim 1, wherein step (D) further displays an image via the electronic device as a reminder whenever the emotion-adjective parameter of a new piece of music has been determined.
TW101148694A 2012-12-20 2012-12-20 Music emotion classification method TWI486949B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW101148694A TWI486949B (en) 2012-12-20 2012-12-20 Music emotion classification method

Publications (2)

Publication Number Publication Date
TW201426730A TW201426730A (en) 2014-07-01
TWI486949B true TWI486949B (en) 2015-06-01

Family

ID=51725632

Family Applications (1)

Application Number Title Priority Date Filing Date
TW101148694A TWI486949B (en) 2012-12-20 2012-12-20 Music emotion classification method

Country Status (1)

Country Link
TW (1) TWI486949B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10261963B2 (en) * 2016-01-04 2019-04-16 Gracenote, Inc. Generating and distributing playlists with related music and stories
US11969656B2 (en) * 2018-11-15 2024-04-30 Sony Interactive Entertainment LLC Dynamic music creation in gaming

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1577877A1 (en) * 2002-10-24 2005-09-21 National Institute of Advanced Industrial Science and Technology Musical composition reproduction method and device, and method for detecting a representative motif section in musical composition data

Also Published As

Publication number Publication date
TW201426730A (en) 2014-07-01

Similar Documents

Publication Publication Date Title
CN108806656B (en) Automatic generation of songs
CN108806655B (en) Automatic generation of songs
WO2020177190A1 (en) Processing method, apparatus and device
Wang et al. Modeling the affective content of music with a Gaussian mixture model
WO2019232928A1 (en) Musical model training method, music creation method, devices, terminal and storage medium
CN106128479B (en) A kind of performance emotion identification method and device
CN113010138B (en) Article voice playing method, device and equipment and computer readable storage medium
Ottl et al. Group-level speech emotion recognition utilising deep spectrum features
Lee et al. System for matching paintings with music based on emotions
CN106383676A (en) Instant photochromic rendering system for sound and application of same
Deb et al. Fourier model based features for analysis and classification of out-of-breath speech
Bedoya et al. Even violins can cry: specifically vocal emotional behaviours also drive the perception of emotions in non-vocal music
Xu et al. Paralinguistic singing attribute recognition using supervised machine learning for describing the classical tenor solo singing voice in vocal pedagogy
TWI486949B (en) Music emotion classification method
Khurana et al. Tri-integrated convolutional neural network for audio image classification using Mel-frequency spectrograms
CN116959393B (en) Training data generation method, device, equipment and medium of music generation model
CN109410972A (en) Generate the method, apparatus and storage medium of sound effect parameters
US20230343321A1 (en) Method and apparatus for processing virtual concert, device, storage medium, and program product
Huang et al. Research on music emotion intelligent recognition and classification algorithm in music performance system
Xu et al. Source separation improves music emotion recognition
Xu et al. Launchpadgpt: Language model as music visualization designer on launchpad
Liu et al. Research on the Correlation Between the Timbre Attributes of Musical Sound and Visual Color
TWI482149B (en) The Method of Emotional Classification of Game Music
Chimthankar Speech Emotion Recognition using Deep Learning
Liu et al. Emotion Recognition of Violin Music based on Strings Music Theory for Mascot Robot System.

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees