CN113377997B - Song retrieval method, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113377997B
CN113377997B
Authority
CN
China
Prior art keywords
target
word
search
training
song
Prior art date
Legal status
Active
Application number
CN202110741923.2A
Other languages
Chinese (zh)
Other versions
CN113377997A (en)
Inventor
万鑫瑞
Current Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Original Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority to CN202110741923.2A
Publication of CN113377997A
Application granted
Publication of CN113377997B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/685 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using automatically derived transcript of audio data, e.g. lyrics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/237 Lexical tools
    • G06F40/242 Dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a song retrieval method, an electronic device, and a computer-readable storage medium. The method acquires target song retrieval information; obtains a target music feature vector for each target search term in the target song retrieval information; determines the contextual (front-rear) relationship of each target search term within the target song retrieval information; determines a target word weight for each target search term based on its target music feature vector and contextual relationship; and determines the song retrieval result corresponding to the target song retrieval information based on the target search terms and their corresponding target word weights. Because the target word weight is determined from both the target music feature vector and the contextual relationship, it is in effect determined from the importance of the search term itself and from the importance of the search term within the target song retrieval information. This improves the accuracy with which the target word weights are determined and, in turn, the accuracy of song retrieval performed on the basis of those weights.

Description

Song retrieval method, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of information processing technology, and more particularly, to a song retrieval method, an electronic device, and a computer-readable storage medium.
Background
Currently, in the song retrieval process, it may be necessary to calculate a word weight for each search term and determine the corresponding song retrieval result according to these weights: for example, the word weight of each search term in the song retrieval information is determined first, priority search terms are chosen according to the magnitude of the weights, and the retrieval results corresponding to the priority search terms are output. In this process, the word weights can be calculated by the query-qanchor method, in which all song retrieval information that clicked through to the same song retrieval result is connected, and the word weight is calculated from the frequency with which each word occurs in that retrieval information. However, the query-qanchor method relies on pre-existing associations between song retrieval information and click results, and these associations can contain misleading information: for example, two dissimilar pieces of song retrieval information may lead to the same song retrieval result, or the retrieval information may differ greatly from the retrieval result. As a consequence, the accuracy of song retrieval is poor.
In view of the above, how to retrieve songs accurately is a problem to be solved by those skilled in the art.
Disclosure of Invention
The purpose of the application is to provide a song retrieval method that can, to a certain extent, solve the technical problem of how to retrieve songs accurately. The application also provides a corresponding electronic device and computer-readable storage medium.
In a first aspect, the present application discloses a song retrieval method, comprising:
acquiring target song retrieval information;
performing normalization processing on each target search term in the target song retrieval information to obtain a target music feature vector for each target search term, wherein the target music feature vector represents the characteristics of the target search term in the music domain;
inputting each target search term into a language neural network model to obtain the contextual (front-rear) relationship of each target search term in the target song retrieval information;
determining the target word weight of each target search term based on its target music feature vector and contextual relationship; and
determining the song retrieval result corresponding to the target song retrieval information based on the target search terms and their corresponding target word weights.
Optionally, the target music feature vector comprises at least one of an initial word weight, a named entity recognition feature, and a compactness value;
wherein the initial word weight comprises a word weight determined by the query-qanchor method; the named entity recognition feature represents the category of the target search term in the music domain; and the compactness represents the co-occurrence probability of the target search term in the music domain.
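As a concrete illustration, the three optional components above can be assembled into a single numeric vector per term. The sketch below is ours, not from the patent; the function name, the NER category set, and the example values are hypothetical.

```python
def music_feature_vector(initial_weight, ner_category, compactness,
                         ner_vocab=("singer", "song", "album", "other")):
    """Assemble a per-term music-domain feature vector from the three
    optional components: query-qanchor initial weight, NER category
    (one-hot encoded so the vector stays numeric), and compactness."""
    ner_onehot = [1.0 if c == ner_category else 0.0 for c in ner_vocab]
    return [initial_weight] + ner_onehot + [compactness]

# A term recognized as a singer name, with high initial weight and compactness:
vec = music_feature_vector(0.8, "singer", 0.9)
```

One-hot encoding of the NER category is one of several reasonable choices here; a trained embedding of the category would also fit the patent's description.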
Optionally, inputting each target search term into the language neural network model and obtaining the context of each target search term in the target song retrieval information comprises:
determining word embedding feature information for each target search term;
inputting the word embedding feature information into the language neural network model to obtain the deep features corresponding to the word embedding feature information;
and taking the deep features as the context of the corresponding target search term.
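The patent leaves the language model architecture open (its classification under G06N3/044 suggests a recurrent network). As a rough, hypothetical stand-in for "deep features that encode each term's context", the sketch below represents each term by the mean embedding of the terms to its left and to its right; a real implementation would use an LSTM or similar model.

```python
def context_features(term_embeddings):
    """Toy context extractor: for each term, concatenate the mean of the
    embeddings to its left and the mean of those to its right.  Stands in
    for the deep features a language neural network model would produce."""
    dim = len(term_embeddings[0])

    def mean(vectors):
        if not vectors:
            return [0.0] * dim
        return [sum(col) / len(vectors) for col in zip(*vectors)]

    return [mean(term_embeddings[:i]) + mean(term_embeddings[i + 1:])
            for i in range(len(term_embeddings))]

emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # hypothetical 2-d word embeddings
ctx = context_features(emb)                 # each ctx[i] has length 4
```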
Optionally, determining the target word weight of each target search term based on its target music feature vector and context comprises:
performing vector splicing on the target music feature vector and the context corresponding to the target search term to obtain a corresponding target vector splicing result;
and determining the target word weight of the target search term based on the target vector splicing result.
Optionally, the determining the target word weight of the target search word based on the target vector concatenation result includes:
and inputting the target vector splicing result into a target neural network model to acquire the target word weight of the target search word.
Optionally, performing vector splicing on the target music feature vector and the context corresponding to the target search term to obtain a corresponding target vector splicing result comprises:
performing vector splicing on the target music feature vector and the context corresponding to the target search term through a predetermined word weight calculation formula to obtain the corresponding target vector splicing result;
The word weight calculation formula comprises:
θ = W_a1*α + W_a2*β + b;
wherein θ represents the target vector splicing result; α represents the numerical information corresponding to the target music feature vector; W_a1 represents the splicing weight corresponding to the target music feature vector; β represents the numerical information corresponding to the target context; W_a2 represents the splicing weight corresponding to the target context; and b represents a preset constant.
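Treating α and β as numeric summaries, the formula is a simple affine combination. In the sketch below, the weight values are illustrative placeholders of our own, not the trained values:

```python
def splice(alpha, beta, w_a1=0.6, w_a2=0.4, b=0.0):
    """theta = W_a1*alpha + W_a2*beta + b, per the patent's word weight
    calculation formula.  alpha: numeric value of the music feature
    vector; beta: numeric value of the target context."""
    return w_a1 * alpha + w_a2 * beta + b

theta = splice(0.5, 1.0)  # 0.6*0.5 + 0.4*1.0 = 0.7
```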
Optionally, determining the song retrieval result corresponding to the target song retrieval information based on the target search terms and their corresponding target word weights comprises:
judging whether each target word weight is greater than a preset value;
if the target word weight is greater than the preset value, classifying the corresponding target search term as a target must-keep word;
if the target word weight is less than or equal to the preset value, classifying the corresponding target search term as a target non-essential word;
and performing song retrieval based on the target must-keep words to obtain the song retrieval result.
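The thresholding above can be sketched as follows; the 0.5 threshold and the example terms are ours, not values from the patent:

```python
def split_terms(term_weights, threshold=0.5):
    """Partition target search terms by word weight: strictly above the
    preset value -> must-keep words, otherwise -> non-essential words."""
    must_keep = [t for t, w in term_weights.items() if w > threshold]
    non_essential = [t for t, w in term_weights.items() if w <= threshold]
    return must_keep, non_essential

must, optional = split_terms({"Jay Chou": 0.9, "the": 0.1, "Nocturne": 0.8})
# song retrieval would then be performed on `must` only
```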
Optionally, the process of determining the word weight calculation formula includes:
acquiring training song retrieval information;
determining a training music feature vector for each training search term in the training song retrieval information;
inputting each training search term into the language neural network model to obtain the training context of each training search term in the training song retrieval information;
acquiring an initial word weight calculation formula;
performing vector splicing on the training music feature vector and the training context corresponding to each training search term based on the initial word weight calculation formula to obtain a corresponding training vector splicing result;
determining the training word weight of each training search term based on the training vector splicing result;
judging whether each training word weight is greater than the preset value; if the training word weight is greater than the preset value, classifying the corresponding training search term as a training must-keep word; if the training word weight is less than or equal to the preset value, classifying the corresponding training search term as a training non-essential word;
determining a loss value of the word weight calculation formula based on the training must-keep words and the training non-essential words;
and adjusting the word weight calculation formula based on the loss value until the pre-trained word weight calculation formula is obtained.
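The adjustment loop above (compute the splicing result, derive weights, compute a loss, adjust the formula) can be illustrated with plain stochastic gradient descent. This sketch fits the formula's parameters against regression targets under a squared error, which simplifies the patent's actual loss; all names and values are ours.

```python
def train_splice_weights(samples, lr=0.1, epochs=200):
    """Fit (w_a1, w_a2, b) in theta = w_a1*alpha + w_a2*beta + b by SGD
    on squared error against target word weights.
    samples: iterable of (alpha, beta, target_weight) triples."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for alpha, beta, target in samples:
            err = (w1 * alpha + w2 * beta + b) - target
            # Gradient of 0.5*err^2 with respect to each parameter:
            w1 -= lr * err * alpha
            w2 -= lr * err * beta
            b -= lr * err
    return w1, w2, b

# Two toy samples: a must-keep-like term (target 1.0) and a weak term (0.2).
w1, w2, b = train_splice_weights([(1.0, 0.2, 1.0), (0.1, 1.0, 0.2)])
```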
Optionally, determining the loss value of the word weight calculation formula based on the training must-keep words and the training non-essential words comprises:
determining the loss value of the word weight calculation formula from the training must-keep words and the training non-essential words by a loss value calculation formula;
the loss value calculation formula includes:
Loss(query) = MSE(Final(term_a1)) + Softmax(Final(term_a2));
wherein Loss(query) represents the loss value; Final(term_a1) represents the training word weights of the training must-keep words; MSE represents the mean square error function; Final(term_a2) represents the training word weights of the training non-essential words; and Softmax represents a logistic regression (softmax) function.
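The patent does not spell out how the MSE and Softmax terms are applied. One plausible reading, sketched below entirely under our own assumptions, regresses the must-keep word weights toward 1.0 with MSE and adds the softmax probability mass that falls on the non-essential words as a penalty:

```python
import math

def query_loss(must_keep_weights, non_essential_weights):
    """Loss(query) = MSE(Final(term_a1)) + Softmax(Final(term_a2)),
    interpreted (our reading, not the patent's exact definition) as:
    MSE of must-keep word weights against 1.0, plus the softmax mass
    assigned to the non-essential word weights."""
    mse = (sum((w - 1.0) ** 2 for w in must_keep_weights)
           / max(len(must_keep_weights), 1))
    weights = must_keep_weights + non_essential_weights
    z = sum(math.exp(w) for w in weights)
    softmax_penalty = sum(math.exp(w) / z for w in non_essential_weights)
    return mse + softmax_penalty
```

Under this reading, the loss is zero exactly when every must-keep word has weight 1.0 and no non-essential words are present.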
In a second aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
A processor for implementing the steps of the song retrieval method as described in any one of the above when executing the computer program.
In a third aspect, the present application discloses a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of the song retrieval method as described in any of the above.
In the application, after the target song retrieval information is acquired, the target music feature vector and the contextual (front-rear) relationship of each target search term in that information are determined. The target music feature vector reflects the characteristics of the target search term in the music domain, and those characteristics reflect the importance of the term itself; the contextual relationship reflects the order in which the term appears in the target song retrieval information, and that order reflects the importance of the term within the retrieval information. Determining the target word weight from both is therefore equivalent to weighting each target search term by its intrinsic importance and by its importance within the target song retrieval information, which improves the accuracy of the determined word weights and, in turn, the accuracy of song retrieval performed with them. The electronic device and computer-readable storage medium disclosed by the application solve the corresponding technical problems in the same way.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a system framework to which the song retrieval scheme provided by the present application is applied;
FIG. 2 is a flowchart of a song retrieval method according to an embodiment of the present application;
FIG. 3 is a graph of information applied in the process of computing word weights based on the query-qanchor method;
FIG. 4 is a flowchart of a song retrieval method according to an embodiment of the present application;
FIG. 5 is a flowchart of a song retrieval method according to an embodiment of the present application;
FIG. 6 is a flowchart of a song retrieval method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of data processing in an embodiment of the present application;
FIG. 8 is a flowchart for determining a word weight calculation formula according to the present application;
FIG. 9 is a schematic diagram of a song search apparatus according to the present application;
FIG. 10 is a block diagram of an electronic device 20 according to an exemplary embodiment.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Currently, in the song retrieval process, it may be necessary to calculate a word weight for each search term and determine the corresponding song retrieval result according to these weights: for example, the word weight of each search term in the song retrieval information is determined first, priority search terms are chosen according to the magnitude of the weights, and the retrieval results corresponding to the priority search terms are output. In this process, the word weights can be calculated by the query-qanchor method, in which all song retrieval information that clicked through to the same song retrieval result is connected, and the word weight is calculated from the frequency with which each word occurs in that retrieval information. However, the query-qanchor method relies on pre-existing associations between song retrieval information and click results, and these associations can contain misleading information: for example, two dissimilar pieces of song retrieval information may lead to the same song retrieval result, or the retrieval information may differ greatly from the retrieval result, so the accuracy of song retrieval is poor. In order to overcome these technical problems, the application provides a word weight determination method that can improve the accuracy of song retrieval.
In the song retrieval scheme of the present application, the system framework adopted may be as shown in FIG. 1 and may include a background server 01 and a number of clients 02 that establish communication connections with the background server 01.
In the application, the background server 01 executes the steps of the song retrieval method: acquiring target song retrieval information; performing normalization processing on each target search term in the target song retrieval information to obtain a target music feature vector for each target search term, the vector representing the characteristics of the target search term in the music domain; inputting each target search term into the language neural network model to obtain the context of each target search term in the target song retrieval information; determining the target word weight of each target search term based on its target music feature vector and context; and determining the song retrieval result corresponding to the target song retrieval information based on the target search terms and their corresponding target word weights.
Further, the background server 01 may also be provided with a song retrieval information database, a music feature vector database, a context database, a word weight database, and a song retrieval result database. The song retrieval information database stores all kinds of song retrieval information; the music feature vector database stores the music feature vector of each search term in the song retrieval information; the context database stores the context of each search term in the song retrieval information; the word weight database stores the word weight of each search term determined by the method; and the song retrieval result database stores the song retrieval results determined by the method. It can be understood that, after song retrieval is performed by the scheme of the present application, if data such as the word weights are no longer needed, the corresponding information stored in each database can be deleted to make room for the information required by the next retrieval, so that each database serves the next song retrieval process.
Of course, the information databases may also be set up in a third-party service server, which can be dedicated to collecting the song retrieval information uploaded by the clients. In this way, when the background server 01 needs song retrieval information, it can obtain the corresponding information by initiating an information call request to the service server. In the present application, the background server 01 may respond to song retrieval requests from one or more clients 02, and so on.
Fig. 2 is a flowchart of a song searching method according to an embodiment of the present application. Referring to fig. 2, the song retrieval method includes:
step S101: and obtaining target song retrieval information.
In this embodiment, the target song retrieval information refers to the retrieval information input by a user when searching for songs. Its content may be determined according to actual needs and may include, for example, the artist of the target song to be retrieved, keywords, the name of an associated movie, and the like.
Step S102: and carrying out normalization processing on each target search word in the target song search information to obtain a target music feature vector of each target search word, wherein the target music feature vector is used for representing the characteristics of the target search word in the music field.
In this embodiment, the word weight to be calculated represents the importance of the target search term, and the importance of a word can be determined from its characteristics. Because this scheme retrieves songs, which belong to the music domain, the characteristics of a target search term in the music domain reflect its importance in the song retrieval process. Therefore, after the target song retrieval information is acquired, the target music feature vector of each target search term in it must also be obtained. In this process, the target music feature vector of each target search term can be obtained by performing normalization processing on each target search term in the target song retrieval information, where normalization refers to converting a dimensional expression into a dimensionless one.
It can be understood that the type of music feature vector may be determined according to the particular application scenario; for example, the music feature vector may include at least one of an initial word weight, a named entity recognition (NER) feature, and a compactness value. The initial word weight can include a weight determined by the query-qanchor method, and the like. The NER feature represents the category of the target search term in the music domain; named entity recognition, also called entity identification, entity chunking, or entity extraction, is a subtask of information extraction that aims to locate named entities in text and classify them into predefined categories such as persons, organizations, locations, time expressions, quantities, monetary values, and percentages. In practical applications, the NER feature of a target search term may be determined by statistical methods, machine learning, a combination of statistics and machine learning, an LSTM (Long Short-Term Memory) network, and the like. The compactness of a target search term refers to the probability that it co-occurs with its neighbors in the music domain: taking song retrieval information such as "my baby" as an example, adjacent words that frequently appear together in a song title are compact in the music domain but need not be compact outside it, so the compactness contained in the music feature vector can accurately reflect the characteristics of the search term in the music domain. Specifically, the compactness of a target search term may be calculated based on a predetermined music-domain compactness dictionary; for this process, reference may be made to the related art, which the application does not repeat herein.
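A compactness lookup over a precomputed music-domain dictionary might look like the following; the dictionary structure, function name, and values are hypothetical.

```python
def pair_compactness(term, next_term, compactness_dict, default=0.0):
    """Return the music-domain co-occurrence (compactness) of an adjacent
    term pair from a precomputed dictionary, defaulting for unseen pairs."""
    return compactness_dict.get((term, next_term), default)

music_dict = {("my", "baby"): 0.9}  # hypothetical dictionary entries
c = pair_compactness("my", "baby", music_dict)  # 0.9
```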
It should be noted that the relationship between the quantities used when calculating word weights by the query-qanchor method may be as shown in FIG. 3. In FIG. 3, query represents a user query string, i.e., the target information; doc represents a click result corresponding to the query, i.e., the processing result of the target information; qanchor represents the set of all queries that clicked the doc; and P represents a probability value. The process of calculating word weights by the query-qanchor method may include the following steps:
Step S1021: calculating the probability P(doc_i|query) of clicking doc_i under the query through a first calculation formula;
the first calculation formula includes:
P(doc_i|query) = click(doc_i) / Σ_{j=1..n} click(doc_j);
wherein click represents the corresponding number of clicks and n represents the total number of docs clicked under the query;
Step S1022: calculating the probability P(qanchor_t|doc_i) that a click on doc_i comes from qanchor_t through a second calculation formula;
the second calculation formula includes:
P(qanchor_t|doc_i) = click(qanchor_t) / Σ_{s=1..m} click(qanchor_s);
wherein m represents the total number of qanchors that clicked doc_i;
Step S1023: connecting all the queries and qanchors that clicked the same doc through a third calculation formula to obtain the probability P(qanchor_t|query) of qanchor_t under the query;
the third calculation formula includes:
P(qanchor_t|query) = Σ_doc P(doc_i|query) * P(qanchor_t|doc_i);
Step S1024: determining the word weight of each word term in the query through a fourth calculation formula;
the fourth calculation formula includes:
P(term|query) = Σ_qanchor P(qanchor_t|query) * β;
wherein β indicates whether term appears in qanchor_t: in a specific application scenario, β = 1 when term appears in qanchor_t and β = 0 when it does not.
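Steps S1021 to S1024 can be put together as follows. The click log structure and the example are ours; the computation follows the four formulas above.

```python
def query_qanchor_weights(query, click_log):
    """Compute P(term|query) per steps S1021-S1024.
    click_log: dict mapping each query string to {doc: click_count};
    the qanchors of a doc are all queries that clicked it."""
    # S1021: P(doc_i|query) from click counts under this query.
    doc_clicks = click_log[query]
    total = sum(doc_clicks.values())
    p_doc = {d: c / total for d, c in doc_clicks.items()}

    # S1022: P(qanchor_t|doc_i) over all queries that clicked doc_i.
    def p_qanchor_given_doc(doc):
        clicks = {q: docs[doc] for q, docs in click_log.items() if doc in docs}
        tot = sum(clicks.values())
        return {q: c / tot for q, c in clicks.items()}

    # S1023: chain over docs to get P(qanchor_t|query).
    p_qanchor = {}
    for d, pd in p_doc.items():
        for q, pq in p_qanchor_given_doc(d).items():
            p_qanchor[q] = p_qanchor.get(q, 0.0) + pd * pq

    # S1024: word weight = sum of P(qanchor_t|query) over the qanchors
    # containing the term (beta = 1 iff the term appears).
    return {t: sum(p for q, p in p_qanchor.items() if t in q.split())
            for t in query.split()}

log = {"singer song": {"d1": 2}, "singer": {"d1": 2}}
weights = query_qanchor_weights("singer song", log)
# "singer" appears in both qanchors, "song" in only one
```

This toy example reproduces the bias discussed next: the shared term accumulates the full probability mass while the other term gets only half.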
As can be seen from the calculation process of the query-qanchor method, it depends on pre-existing associations between queries and docs, and these associations can contain misleading information. For example, suppose the query is A (a singer name) and the doc is B (a song name): the probability of clicking the doc when searching A is higher than when searching B directly, so the method assigns A a word weight of 0.5 and B a weight of 1, whereas the true word weight of A is 1 and that of B is 0.5. Similarly, when the same doc is clicked from one query "AB" and from another query "A", the associated information again causes the computed word weights to deviate from the theoretically expected ones, so the word weight calculation result is inaccurate. In addition, if no association between a query and a doc exists yet, or if the query contains many words irrelevant to the doc, the accuracy of the finally computed word weight is worse still. Therefore, in the present application, in order to ensure the accuracy of the word weights, the initial word weight determined by the query-qanchor method is used only as one component of the word feature information from which the target word weights are calculated.
Step S103: and inputting each target search term into the language neural network model, and acquiring the target front-rear relation of each target search term in the target song search information.
In this embodiment, there is a logical relationship among the terms of the target song retrieval information: the information is obtained by combining and ordering the target search terms according to a certain logic. This logical relationship is reflected in the front-rear (contextual) relationship of the target search terms within the target song retrieval information, and it in turn reflects the importance of each target search term in the retrieval information. For example, a search term acting as a modifier is less important than the search term it modifies, and the two have an obvious front-rear relationship.
Step S104: and determining the target word weight of each target search word based on the target music feature vector and the target front-rear relation of each target search word.
In this embodiment, the target word weight, that is, the word weight of a target search word, reflects the importance of that search word in the target song search information, and this importance affects the song search result. When the word weights are accurate, the search can be performed preferentially according to the target search word with the highest target word weight, so that the song search result is obtained more quickly and accurately.
Step S105: and determining a song retrieval result corresponding to the target song retrieval information based on the target retrieval word and the weight of the target word corresponding to the target retrieval word.
In this embodiment, after the target word weights of the target search words are determined, the importance of each target search word can be represented by its target word weight, so the song search result corresponding to the target song search information can be determined quickly and accurately based on the target search words and their corresponding target word weights. It should be noted that, after the song search result is determined, it may be displayed and fed back to the searcher; the application is not limited in detail herein.
In the application, after the target song retrieval information is acquired, the target music feature vector and the target front-rear relation of each target search word in the target song retrieval information must be determined. The target music feature vector reflects the characteristics of the target search word in the music field, and those characteristics reflect the importance of the word; the target front-rear relation reflects the order in which the target search word appears in the target song retrieval information, and that order reflects the word's importance within the retrieval information. Determining the target word weight based on the target music feature vector and the target front-rear relation is therefore equivalent to determining it according to both the intrinsic importance of the target search word and its importance within the target song retrieval information.
Fig. 4 is a flowchart of a song searching method according to an embodiment of the present application. Referring to fig. 4, the song retrieval method includes:
step S201: and obtaining target song retrieval information.
Step S202: and carrying out normalization processing on each target search word in the target song search information to obtain a target music feature vector of each target search word, wherein the target music feature vector is used for representing the characteristics of the target search word in the music field.
Step S203: and determining word embedding characteristic information of each target search word.
In this embodiment, in order to quickly determine the front-rear relation of each target search term in the target song search information, the relation may be determined based on an existing language neural network model. Because most language neural network models are standard machine learning algorithms that process numeric input, the word embedding (Word Embedding) feature information of each target search term must be determined first. Word embedding is a method of converting the words in a text into numeric vectors so that they can be processed in digital form: a high-dimensional space whose dimension is the total number of words is embedded into a continuous vector space of much lower dimension, each word or phrase is mapped to a vector over the real numbers, and the result of word embedding is a word vector. In practical applications, the word embedding feature information of a target search word may be determined by means of one-hot encoding, information retrieval (Information Retrieval, IR) based representations, distributed representations, and the like.
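Of the encoding options mentioned above, one-hot encoding is the simplest to illustrate. The sketch below is a minimal example of mapping search terms to numeric vectors; the vocabulary and the terms are hypothetical examples, not from the application.

```python
# Minimal one-hot word-embedding sketch: each search term is mapped to a
# numeric vector whose dimension equals the vocabulary size, with a single 1
# at the term's own index and 0 elsewhere.

def one_hot_embed(terms):
    vocab = sorted(set(terms))               # fix an ordering for the vocabulary
    index = {w: i for i, w in enumerate(vocab)}
    return {w: [1.0 if i == index[w] else 0.0 for i in range(len(vocab))]
            for w in terms}

vectors = one_hot_embed(["singer", "song", "live"])
print(vectors["song"])  # e.g. [0.0, 0.0, 1.0] with the sorted vocabulary
```

A distributed representation would instead map each term to a dense low-dimensional vector, which is what the "much lower dimension" remark above refers to.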
Step S204: and inputting the word embedded feature information into a language neural network model, and obtaining the deep feature corresponding to each word embedded feature information.
Step S205: and taking the deep feature as the target context of the corresponding target search term.
In this embodiment, after the word embedding feature information of each target search word is determined, it may be input into the language neural network model to obtain the deep feature corresponding to each piece of word embedding feature information, and the deep feature can represent the target front-rear relation of the target search word. In a specific application scenario, the deep feature may be used directly as the target front-rear relation of the corresponding target search word; alternatively, the deep feature may be processed further and the processing result used as the target front-rear relation.
It should be noted that, the language neural network model applied in the embodiment may be built based on neural networks such as LSTM (Long Short-Term Memory) and BiLstm (Bi-directional Long Short-Term Memory), and the structure and training process of the language neural network model may refer to the related art, and the application is not limited in detail herein.
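To make the "deep feature depends on context on both sides" idea concrete without a full LSTM implementation, here is a toy stand-in: it is NOT an LSTM or BiLSTM, merely a hypothetical bidirectional summary that concatenates forward and backward running averages of the embeddings, so each position's feature reflects the terms before and after it.

```python
# Toy stand-in for a bidirectional language model (not the patent's model):
# for each position, concatenate a running average of the embeddings seen so
# far in the forward direction with one from the backward direction.

def toy_bidirectional_features(embeddings):
    dim = len(embeddings[0])

    def running_avgs(seq):
        out, acc = [], [0.0] * dim
        for t, vec in enumerate(seq, start=1):
            acc = [a + v for a, v in zip(acc, vec)]
            out.append([a / t for a in acc])
        return out

    fwd = running_avgs(embeddings)                 # left-to-right context
    bwd = running_avgs(embeddings[::-1])[::-1]     # right-to-left context
    return [f + b for f, b in zip(fwd, bwd)]       # concatenate per position

feats = toy_bidirectional_features([[1.0, 0.0], [0.0, 1.0]])
print(feats[0])  # [1.0, 0.0, 0.5, 0.5]
```

A real BiLSTM replaces the running averages with learned recurrent cells, but the output shape (one context-aware vector per term) is analogous.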
Step S206: and determining the target word weight of each target search word based on the target music feature vector and the target front-rear relation of each target search word.
Step S207: and determining a song retrieval result corresponding to the target song retrieval information based on the target retrieval word and the weight of the target word corresponding to the target retrieval word.
Fig. 5 is a flowchart of a song searching method according to an embodiment of the present application. Referring to fig. 5, the song retrieval method includes:
step S301: and obtaining target song retrieval information.
Step S302: and carrying out normalization processing on each target search word in the target song search information to obtain a target music feature vector of each target search word, wherein the target music feature vector is used for representing the characteristics of the target search word in the music field.
Step S303: and inputting each target search term into the language neural network model, and acquiring the target front-rear relation of each target search term in the target song search information.
Step S304: and vector splicing is carried out on the target music feature vector corresponding to the target retrieval word and the target front-back relation, so that a corresponding target vector splicing result is obtained.
Step S305: and determining the target word weight of the target search word based on the target vector splicing result.
In this embodiment, the target music feature vector and the target front-rear relation both affect the target word weight of the target search word, and they describe the importance of the target search word in the target song search information from two different aspects. Therefore, in the process of determining the target word weight of each target search word based on these two inputs, vector splicing may be performed on the target music feature vector and the target front-rear relation corresponding to the target search word to obtain a corresponding target vector splicing result, and the target word weight of the target search word is then determined based on that result.
It can be understood that in the process of determining the target word weight of the target search word based on the target vector splicing result, the target vector splicing result can be directly input into the target neural network model to obtain the target word weight of the target search word, that is, the target word weight corresponding to the target vector splicing result can be determined by means of the target neural network model, and description such as the structure of the target neural network model can refer to the prior art.
It can be understood that, in the process of performing vector splicing on the target music feature vector corresponding to the target retrieval word and the target front-rear relationship to obtain the corresponding target vector splicing result, the following steps may be included:
Vector splicing is carried out on the target music feature vector corresponding to the target retrieval word and the front-back relation of the target through a predetermined word weight calculation formula, and a corresponding target vector splicing result is obtained;
The word weight calculation formula includes:
θ=Wa1*α+Wa2*β+b;
Wherein θ represents the target vector splicing result; α represents numerical information corresponding to the target music feature vector; Wa1 represents the splicing weight value corresponding to the target music feature vector; β represents numerical information corresponding to the target front-rear relation; Wa2 represents the splicing weight value corresponding to the target front-rear relation; b represents a preset constant value.
Step S306: and determining a song retrieval result corresponding to the target song retrieval information based on the target retrieval word and the weight of the target word corresponding to the target retrieval word.
Fig. 6 is a flowchart of a song retrieving method according to an embodiment of the present application, and fig. 7 is a schematic diagram of processing data according to an embodiment of the present application. Referring to fig. 6, the song retrieval method includes:
step S401: and obtaining target song retrieval information.
Step S402: and carrying out normalization processing on each target search word in the target song search information to obtain a target music feature vector of each target search word, wherein the target music feature vector is used for representing the characteristics of the target search word in the music field.
Step S403: and inputting each target search term into the language neural network model, and acquiring the target front-rear relation of each target search term in the target song search information.
Step S404: and determining the target word weight of each target search word based on the target music feature vector and the target front-rear relation of each target search word.
Step S405: judging whether the weight of the target word is larger than a preset numerical value or not; if the weight of the target word is greater than the preset value, step S406 is executed; if the target word weight is less than or equal to the preset value, step S408 is performed.
Step S406: and classifying the target retrieval word corresponding to the target word weight as a target obligatory word, and executing step S407.
Step S407: and searching songs based on the target necessary word to obtain song searching results.
Step S408: and classifying the target retrieval words corresponding to the target word weights as target unnecessary reserved words.
In this embodiment, after the target word weights of each target search word are determined based on the target music feature vector and the target front-rear relation, the target search words that participate in the song search can be selected according to their target word weights when determining the song search result corresponding to the target song search information: the obligatory words are determined first, and the song search is then performed according to those obligatory words to obtain the song search result that best meets the user's requirement, for example, songs containing the information of all target obligatory words are determined as the song search result. In this process, a preset numerical value for distinguishing obligatory words from non-obligatory words may be determined in advance, and each target word weight is then compared with it. If the target word weight is greater than the preset value, the target search word corresponding to that weight is classified as a target obligatory word, representing that it must participate in the song search; if the target word weight is less than or equal to the preset value, the target search word is classified as a target non-obligatory word, representing that it does not need to participate in the song search. In this way, the target obligatory words and target non-obligatory words are rapidly determined.
It is to be understood that the preset value may be determined empirically, or may be determined according to the word weight of this time, etc., which is not particularly limited herein.
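Steps S405-S408 above amount to a simple threshold split. The sketch below illustrates it; the threshold value and the example term weights are hypothetical.

```python
# Sketch of steps S405-S408: split target search terms into obligatory
# (must-stay) and non-obligatory words by comparing each target word weight
# with a preset numerical value.

def classify_terms(term_weights, threshold=0.5):
    must_stay = [t for t, w in term_weights.items() if w > threshold]
    optional = [t for t, w in term_weights.items() if w <= threshold]
    return must_stay, optional

must, opt = classify_terms({"singer": 0.9, "live": 0.3, "song": 0.7})
print(must, opt)  # ['singer', 'song'] ['live']
```

The song search then proceeds using only the obligatory words, e.g. returning songs whose information contains all of them.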
FIG. 8 is a flowchart illustrating the determination of a word weight calculation formula according to the present application. Referring to fig. 8, the determining process of the word weight calculation formula may include:
step S501: training song retrieval information is obtained.
In this embodiment, the training song search information may be determined by the user in the history search information of songs, or may be written by the user in real time according to the target songs, or the like, which is not particularly limited herein.
Step S502: and determining training music feature vectors of all training search words in the training song search information.
Step S503: and inputting each training search word into the language neural network model to acquire the training front-back relation of each training search word in the training song search information.
In this embodiment, because word weights are determined from the music feature vectors and the front-rear relations of the search words, determining the word weight calculation formula from training song search information requires determining the training music feature vector of each training search word and the training front-rear relation of each training search word in the training song search information. The training music feature vector is the music feature vector corresponding to the training search word, and the training front-rear relation is the front-rear relation corresponding to the training search word; their types and determination methods can refer to the corresponding descriptions of the target music feature vector and the target front-rear relation.
Step S504: an initial word weight calculation formula is obtained.
Step S505: and carrying out vector splicing on training music feature vectors corresponding to training search terms and the relation before and after training based on an initial word weight calculation formula to obtain corresponding training vector splicing results.
Step S506: and determining the training word weight of the training search word based on the training vector splicing result.
In this embodiment, after the training music feature vector and the training front-rear relation of each training search term are determined, an initial word weight calculation formula may be obtained. Vector splicing is performed on the training music feature vector and the training front-rear relation corresponding to each training search term based on the initial formula to obtain a corresponding training vector splicing result, and the training word weight of the training search term, that is, the word weight of the training search term, is then determined based on that result, so that the initial word weight calculation formula can be adjusted according to the training word weights.
Step S507: judging whether the weight of the training word is larger than a preset numerical value or not; if the weight of the training word is greater than the preset value, executing step S508; if the training word weight is less than or equal to the preset value, step S509 is performed.
Step S508: and classifying training search words corresponding to the training word weights as training obligatory words.
Step S509: and classifying training search words corresponding to the training word weights as training unnecessary reserved words.
In this embodiment, after the training word weights are determined, the training search words may likewise be classified into training obligatory words and training non-obligatory words by the preset numerical value, where a training obligatory word represents a training search word that is an obligatory word, and a training non-obligatory word represents one that is not.
Step S510: the loss value of the word weight calculation formula is determined based on the training obligatory word and the training unnecessary word.
Step S511: and adjusting the word weight calculation formula based on the loss value until a pre-trained word weight calculation formula is obtained.
In this embodiment, after the training obligatory words and training non-obligatory words are determined, a loss value of the word weight calculation formula may be determined based on them in order to adjust the formula, and the formula is then adjusted based on the loss value until a pre-trained word weight calculation formula is obtained. Specifically, the loss value may be compared with a preset value; if the comparison shows that the word weight calculation formula does not meet the requirement, the formula is adjusted based on the loss value, and this process repeats until a formula meeting the requirement, that is, the pre-trained word weight calculation formula, is obtained.
It will be appreciated that in determining the loss value of a word weight calculation formula based on training must-stay words and training non-must-stay words, the loss value may be calculated by a loss function, such as by the formula:
Loss(query)=MSE(Final(terma1))+Softmax(Final(terma2))
To calculate a loss value, wherein Loss(query) represents the loss value; Final(terma1) represents the training word weights of the training obligatory (must-stay) words; MSE (Mean Square Error) denotes the mean square error function; Final(terma2) represents the training word weights of the training non-obligatory words; Softmax represents the softmax (normalized exponential) function. Of course, the loss value may be calculated in other ways, and the application is not limited in detail herein.
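One possible reading of the loss formula above is sketched below: the MSE term pushes obligatory-word weights toward 1, and the softmax term is interpreted as the normalized-exponential probability mass assigned to the non-obligatory words. This interpretation (including the target value 1) is an assumption, since the formula does not spell out the targets.

```python
# Sketch (assumed reading) of Loss(query) = MSE(Final(term_a1)) + Softmax(Final(term_a2)):
# MSE of must-stay weights against a target of 1, plus the softmax mass of the
# non-must-stay weights over all weights.
import math

def loss(must_stay_weights, optional_weights, all_weights):
    mse = sum((1.0 - w) ** 2 for w in must_stay_weights) / len(must_stay_weights)
    denom = sum(math.exp(w) for w in all_weights)
    soft_opt = sum(math.exp(w) for w in optional_weights) / denom
    return mse + soft_opt

print(loss([1.0], [0.0], [1.0, 0.0]))  # MSE term is 0; only softmax mass remains
```

Lowering this loss simultaneously rewards confident obligatory words and penalizes weight given to non-obligatory ones, matching the training objective described in the text.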
In addition, when the loss value is compared with the preset value and the word weight calculation formula is found not to meet the requirement, the adjustment process judges whether the loss value is smaller than the preset value. If the loss value is smaller than the preset value, the word weight calculation formula is determined to meet the requirement; if the loss value is greater than or equal to the preset value, the formula is determined not to meet the requirement and must be adjusted, and the process is repeated until a word weight calculation formula meeting the requirement is obtained.
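The adjust-until-satisfied loop described above can be sketched generically: recompute the loss after each parameter adjustment and stop once it falls below the preset value. The quadratic loss and the halving step used here are hypothetical stand-ins, not the real training procedure.

```python
# Sketch of the loop: keep adjusting the formula's parameters until the loss
# value is smaller than the preset value (with a safety cap on iterations).

def train_until(loss_fn, params, step_fn, preset=0.01, max_iters=1000):
    for _ in range(max_iters):
        if loss_fn(params) < preset:
            return params  # formula now meets the requirement
        params = step_fn(params)
    return params

# Toy example: quadratic loss with a hypothetical halving adjustment step.
final = train_until(lambda p: p * p, 1.0, lambda p: p / 2)
print(final)  # 0.0625 — first value whose squared loss drops below 0.01
```

In practice `step_fn` would be a gradient update of Wa1, Wa2 and b driven by the loss value, but the stopping condition is the comparison with the preset value, as the text states.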
To facilitate an understanding of the present application, the scheme described herein will now be described in terms of a song retrieval process in the field of music, which may include the steps of:
acquiring target song retrieval information input by a user;
performing normalization processing on each target search word in the target song search information to obtain a target music feature vector of each target search word, wherein the target music feature vector comprises: initial word weight, named entity recognition feature and compactness, wherein the initial word weight comprises a weight determined based on a query-qanchor method;
determining word embedding characteristic information of each target search word;
Inputting word embedded feature information into a language neural network model, and obtaining deep features corresponding to the word embedded feature information;
Taking the deep feature as a target front-back relation of a corresponding target search term;
Vector splicing is carried out on the target music feature vector corresponding to the target retrieval word and the front-back relation of the target, and a corresponding target vector splicing result is obtained;
Inputting the target vector splicing result into a target neural network model to obtain target word weight of a target search word;
Judging whether the weight of the target word is larger than a preset numerical value or not; if the weight of the target word is greater than a preset value, classifying the target retrieval word corresponding to the weight of the target word as a target necessary word; if the target word weight is smaller than or equal to the preset numerical value, classifying the target search word corresponding to the target word weight as a target unnecessary word;
and searching songs based on the target necessary word to obtain song searching results.
Referring to fig. 9, the embodiment of the application also discloses a song retrieval device correspondingly, which is applied to a background server and comprises:
an information acquisition module 101 for acquiring target song retrieval information;
The music feature vector determining module 102 is configured to normalize each target search term in the target song search information to obtain a target music feature vector of each target search term, where the target music feature vector is used to represent characteristics of the target search term in the music field;
A front-rear relationship determining module 103, configured to input each target search term to a language neural network model, and obtain a target front-rear relationship of each target search term in the target song search information;
The word weight determining module 104 is configured to determine a target word weight of each target search word based on the target music feature vector and the target front-rear relationship of each target search word;
The song search result determining module 105 is configured to determine a song search result corresponding to the target song search information based on the target search term and the target term weight corresponding to the target search term.
Therefore, the application determines the target word weight based on the target music feature vector and the target front-rear relation, which is equivalent to determining the target word weight according to both the intrinsic importance of the target search word and its importance within the target song retrieval information. Compared with the prior art, which determines word weights only according to occurrence frequency, the application can improve the accuracy with which word weights are determined, and in turn the accuracy of the song search results obtained based on the target word weights.
In some embodiments, the target musical feature vector includes at least one of an initial word weight, a named entity recognition feature, and a compactness;
The initial word weight comprises a word weight determined based on a query-qanchor method; the named entity recognition feature is used for representing the category of the target search term in the music field; the compactness is used to represent co-occurrence probabilities of target terms in the music domain.
In some embodiments, the context determination module may be specifically configured to: determining word embedding characteristic information of each target search word; inputting word embedded feature information into a language neural network model, and obtaining deep features corresponding to the word embedded feature information; and taking the deep feature as the target context of the corresponding target search term.
In some embodiments, the word weight determination module may be specifically configured to: vector splicing is carried out on the target music feature vector corresponding to the target retrieval word and the front-back relation of the target, and a corresponding target vector splicing result is obtained; and determining the target word weight of the target search word based on the target vector splicing result.
In some embodiments, the word weight determination module may be specifically configured to: and inputting the target vector splicing result into a target neural network model to obtain the target word weight of the target search word.
In some embodiments, the word weight determination module may be specifically configured to:
Vector splicing is carried out on the target music feature vector corresponding to the target retrieval word and the front-back relation of the target through a predetermined word weight calculation formula, and a corresponding target vector splicing result is obtained;
The word weight calculation formula includes:
θ=Wa1*α+Wa2*β+b;
Wherein θ represents the target vector splicing result; α represents numerical information corresponding to the target music feature vector; Wa1 represents the splicing weight value corresponding to the target music feature vector; β represents numerical information corresponding to the target front-rear relation; Wa2 represents the splicing weight value corresponding to the target front-rear relation; b represents a preset constant value.
In some embodiments, the song search result determination module may be specifically configured to: judging whether the weight of the target word is larger than a preset numerical value or not; if the weight of the target word is greater than a preset value, classifying the target retrieval word corresponding to the weight of the target word as a target necessary word; if the target word weight is smaller than or equal to the preset numerical value, classifying the target search word corresponding to the target word weight as a target unnecessary word; and searching songs based on the target necessary word to obtain song searching results.
In some specific embodiments, the word weight determining apparatus may further include:
The word weight calculation formula determining module is used for: acquiring training song retrieval information; determining training music feature vectors of each training search word in training song search information; inputting each training search word into a language neural network model, and acquiring the training front-back relation of each training search word in training song search information; acquiring an initial word weight calculation formula; based on an initial word weight calculation formula, vector splicing is carried out on training music feature vectors corresponding to training search words and the relation before and after training, and corresponding training vector splicing results are obtained; determining training word weights of training search words based on training vector splicing results; judging whether the weight of the training word is larger than a preset numerical value or not; if the weight of the training word is greater than the preset value, classifying the training search word corresponding to the weight of the training word as a training obligatory word; if the training word weight is smaller than or equal to the preset numerical value, classifying the training search word corresponding to the training word weight as a training unnecessary word; determining a loss value of a word weight calculation formula based on the training obligatory word and the training unnecessary word; and adjusting the word weight calculation formula based on the loss value until a pre-trained word weight calculation formula is obtained.
In some embodiments, the term weight calculation formula determination module may be specifically configured to: determining a loss value of a word weight calculation formula based on training obligatory words and training unnecessary words through the loss value calculation formula;
the loss value calculation formula includes:
Loss(query)=MSE(Final(terma1))+Softmax(Final(terma2));
Wherein Loss(query) represents the loss value; Final(terma1) represents the training word weights of the training obligatory words; MSE represents the mean square error function; Final(terma2) represents the training word weights of the training non-obligatory words; Softmax represents the softmax function.
Further, the embodiment of the application also provides electronic equipment. Fig. 10 is a block diagram of an electronic device 20, according to an exemplary embodiment, and the contents of the diagram should not be construed as limiting the scope of use of the present application in any way.
Fig. 10 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present application. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. Wherein the memory 22 is configured to store a computer program that is loaded and executed by the processor 21 to implement the relevant steps in the song retrieval method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may be a server.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and the communication protocol to be followed is any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 25 is used for acquiring external input data or outputting external output data, and the specific interface type thereof may be selected according to the specific application requirement, which is not limited herein.
The memory 22 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, and the resources stored thereon may include an operating system 221, a computer program 222, video data 223, and the like, and the storage may be temporary storage or permanent storage.
The operating system 221 is used for managing and controlling various hardware devices on the electronic device 20 and the computer program 222, so as to implement the operation and processing by the processor 21 of the massive video data 223 in the memory 22, and may be Windows Server, Netware, Unix, Linux, etc. The computer program 222 may further include, in addition to the computer program that can be used to perform the song retrieval method performed by the electronic device 20 disclosed in any of the previous embodiments, a computer program that can be used to perform other specific tasks. The data 223 may include various data collected by the electronic device 20.
Further, an embodiment of the present application also discloses a storage medium storing a computer program which, when loaded and executed by a processor, implements the steps of the song retrieval method disclosed in any of the previous embodiments.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are the same or similar, the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method section.
It is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A song retrieval method, comprising:
acquiring target song search information;
performing normalization processing on each target search term in the target song search information to obtain a target music feature vector of each target search term, wherein the target music feature vector is used for characterizing the target search term in the music field;
inputting each target search term into a language neural network model, and obtaining the target context of each target search term in the target song search information;
determining a target word weight of each target search term based on the target music feature vector and the target context of the target search term;
determining a song search result corresponding to the target song search information based on the target search terms and their corresponding target word weights;
wherein the determining the target word weight of each target search term based on the target music feature vector and the target context of each target search term comprises:
performing vector splicing on the target music feature vector and the target context corresponding to the target search term through a predetermined word weight calculation formula, to obtain a corresponding target vector splicing result;
determining the target word weight of the target search term based on the target vector splicing result;
the word weight calculation formula comprises:
θ = Wa1*α + Wa2*β + b;
wherein θ represents the target vector splicing result; α represents the numerical information corresponding to the target music feature vector; Wa1 represents the splicing weight value corresponding to the target music feature vector; β represents the numerical information corresponding to the target context; Wa2 represents the splicing weight value corresponding to the target context; and b represents a preset constant value.
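Read outside the claim language, the word-weight formula in claim 1 is a learned affine combination of the two feature signals rather than a literal vector concatenation. A minimal sketch of this step follows; all parameter values are hypothetical, not taken from the patent.

```python
# Hypothetical learned parameters; in the patent these would be obtained
# by the training procedure of claim 6.
W_a1 = 0.6   # splicing weight for the music-feature signal (alpha)
W_a2 = 0.4   # splicing weight for the contextual signal (beta)
b = 0.1      # preset constant value

def splice(alpha: float, beta: float) -> float:
    """theta = Wa1*alpha + Wa2*beta + b, as stated in claim 1."""
    return W_a1 * alpha + W_a2 * beta + b

theta = splice(alpha=0.8, beta=0.5)  # approximately 0.78 with these values
```

The result theta then feeds the target neural network model of claim 4, which maps it to the final word weight.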
2. The method of claim 1, wherein the target music feature vector comprises at least one of an initial word weight, a named entity recognition feature, and a compactness;
wherein the initial word weight comprises a word weight determined based on a query-qanchor method; the named entity recognition feature is used for characterizing the category of the target search term in the music field; and the compactness is used for characterizing the co-occurrence probability of the target search term in the music field.
3. The method of claim 1, wherein the inputting each target search term into the language neural network model to obtain the target context of each target search term in the target song search information comprises:
determining word embedding feature information of each target search term;
inputting the word embedding feature information into the language neural network model, and obtaining a deep feature corresponding to the word embedding feature information;
and taking the deep feature as the target context of the corresponding target search term.
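Claim 3 maps each term to an embedding and lets a language model turn it into a deep contextual feature. The toy sketch below illustrates only the data flow: the embedding table, the neighbour-mixing "model", and all vector values are hypothetical stand-ins for the real neural network.

```python
# Hypothetical embedding table for a few search terms.
EMB = {"love": [1.0, 0.0], "story": [0.0, 1.0], "remix": [0.5, 0.5]}

def contextual_features(terms):
    """Stand-in 'language model': each term's feature is its embedding
    mixed with half of each adjacent term's embedding, so the output
    depends on the term's position in the query (its context)."""
    feats = []
    for i, t in enumerate(terms):
        vec = EMB[t][:]
        for j in (i - 1, i + 1):  # mix in the left and right neighbours
            if 0 <= j < len(terms):
                vec = [v + 0.5 * e for v, e in zip(vec, EMB[terms[j]])]
        feats.append(vec)
    return feats

feats = contextual_features(["love", "story"])
```

A real implementation would replace the neighbour mixing with the deep features of a trained language neural network; only the shape of the computation is meant to carry over.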
4. The method of claim 1, wherein the determining the target word weight of the target search term based on the target vector splicing result comprises:
inputting the target vector splicing result into a target neural network model to obtain the target word weight of the target search term.
5. The method of claim 1, wherein the determining a song search result corresponding to the target song search information based on the target search terms and their corresponding target word weights comprises:
judging whether the target word weight is greater than a preset value;
if the target word weight is greater than the preset value, classifying the target search term corresponding to that target word weight as a target must-keep word;
if the target word weight is less than or equal to the preset value, classifying the target search term corresponding to that target word weight as a target non-essential word;
and performing song searching based on the target must-keep words to obtain the song search result.
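The thresholding step of claim 5 reduces to a simple partition of the query terms by weight. The sketch below assumes hypothetical weights and a hypothetical threshold of 0.5; the patent leaves the preset value unspecified.

```python
def split_terms(term_weights, threshold=0.5):
    """Partition search terms into must-keep words (weight above the
    preset threshold) and non-essential words (at or below it)."""
    must_keep = [t for t, w in term_weights.items() if w > threshold]
    optional = [t for t, w in term_weights.items() if w <= threshold]
    return must_keep, optional

# Hypothetical query "jay chou live" with illustrative word weights.
weights = {"jay": 0.9, "chou": 0.85, "live": 0.2}
must_keep, optional = split_terms(weights)
# Song retrieval would then be required to match the must-keep terms,
# while the non-essential terms may be relaxed.
```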
6. The method of claim 5, wherein the determining of the word weight calculation formula comprises:
acquiring training song search information;
determining a training music feature vector of each training search term in the training song search information;
inputting each training search term into the language neural network model, and obtaining the training context of each training search term in the training song search information;
acquiring an initial word weight calculation formula;
performing, based on the initial word weight calculation formula, vector splicing on the training music feature vector and the training context corresponding to the training search term, to obtain a corresponding training vector splicing result;
determining a training word weight of each training search term based on the training vector splicing result;
judging whether the training word weight is greater than the preset value; if the training word weight is greater than the preset value, classifying the training search term corresponding to that training word weight as a training must-keep word; if the training word weight is less than or equal to the preset value, classifying the training search term corresponding to that training word weight as a training non-essential word;
determining a loss value of the word weight calculation formula based on the training must-keep words and the training non-essential words;
and adjusting the word weight calculation formula based on the loss value until the pre-trained word weight calculation formula is obtained.
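Claim 6 does not state how the formula's parameters are adjusted from the loss; in practice this would be gradient descent. As a self-contained stand-in, the sketch below uses a simple coordinate hill-climb: nudge each parameter and keep the change whenever the loss drops. Everything here (the step size, iteration count, and the quadratic example loss) is hypothetical.

```python
def adjust(params, loss_fn, step=0.05, iters=100):
    """Iteratively adjust the formula's parameters (e.g. Wa1, Wa2, b):
    try moving each parameter up or down by `step` and keep any move
    that lowers the loss, stopping after `iters` passes."""
    best = loss_fn(params)
    for _ in range(iters):
        for k in list(params):
            for delta in (step, -step):
                trial = dict(params, **{k: params[k] + delta})
                trial_loss = loss_fn(trial)
                if trial_loss < best:
                    params, best = trial, trial_loss
    return params, best

# Illustrative use: drive a single splice weight toward the value 1.0
# under a toy quadratic loss.
tuned, final_loss = adjust({"Wa1": 0.0}, lambda p: (p["Wa1"] - 1.0) ** 2)
```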
7. The method of claim 6, wherein the determining the loss value of the word weight calculation formula based on the training must-keep words and the training non-essential words comprises:
determining the loss value of the word weight calculation formula based on the training must-keep words and the training non-essential words by using a loss value calculation formula;
the loss value calculation formula comprises:
Loss(query) = MSE(Final(terma1)) + Softmax(Final(terma2));
wherein Loss(query) represents the loss value; Final(terma1) represents the training word weight of a training must-keep word; MSE represents the mean square error function; Final(terma2) represents the training word weight of a training non-essential word; and Softmax represents a logistic regression function.
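The claim does not spell out the regression target of the MSE term or exactly what the Softmax is applied to. The sketch below is therefore only one plausible reading, under two explicit assumptions: must-keep weights are regressed toward a target of 1.0, and non-essential weights are penalized through a softmax-normalized weighted sum. It is not the patented implementation.

```python
import math

def query_loss(keep_weights, optional_weights):
    """One reading of Loss(query) = MSE(Final(term_a1)) + Softmax(Final(term_a2)).
    keep_weights: predicted weights of the training must-keep words.
    optional_weights: predicted weights of the training non-essential words."""
    # MSE term: must-keep weights pushed toward 1.0 (assumed target).
    mse = sum((w - 1.0) ** 2 for w in keep_weights) / max(len(keep_weights), 1)
    # Softmax term: softmax-normalized sum of the non-essential weights,
    # so large weights on non-essential words are penalized most.
    if optional_weights:
        exps = [math.exp(w) for w in optional_weights]
        z = sum(exps)
        soft = sum((e / z) * w for e, w in zip(exps, optional_weights))
    else:
        soft = 0.0
    return mse + soft
```

With perfect predictions (must-keep weights at 1.0, non-essential weights at 0.0) this loss is zero, which matches the intent of pushing the two classes apart.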
8. An electronic device, comprising:
a memory for storing a computer program;
A processor for implementing the steps of the song retrieval method according to any one of claims 1 to 7 when executing the computer program.
9. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed by a processor, implements the steps of the song retrieval method according to any one of claims 1 to 7.
CN202110741923.2A 2021-06-30 2021-06-30 Song retrieval method, electronic equipment and computer readable storage medium Active CN113377997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110741923.2A CN113377997B (en) 2021-06-30 2021-06-30 Song retrieval method, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113377997A CN113377997A (en) 2021-09-10
CN113377997B (en) 2024-06-18

Family

ID=77580344

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2823761B2 (en) * 1992-12-24 1998-11-11 シャープ株式会社 Document search device
JP2000035965A (en) * 1998-07-17 2000-02-02 Nippon Telegr & Teleph Corp <Ntt> Method and device for retrieving similar feature quantity and storage medium storing retrieval program of similar feature quantity
JP6612293B2 (en) * 2017-06-30 2019-11-27 日本電信電話株式会社 Document search device, word presentation device, method and program thereof
CN110019658B (en) * 2017-07-31 2023-01-20 腾讯科技(深圳)有限公司 Method and related device for generating search term
CN108763191B (en) * 2018-04-16 2022-02-11 华南师范大学 Text abstract generation method and system
CN109947902B (en) * 2019-03-06 2021-03-26 腾讯科技(深圳)有限公司 Data query method and device and readable medium
CN110288980A (en) * 2019-06-17 2019-09-27 平安科技(深圳)有限公司 Audio recognition method, the training method of model, device, equipment and storage medium
CN110597949A (en) * 2019-08-01 2019-12-20 湖北工业大学 Court similar case recommendation model based on word vectors and word frequency
CN110598067B (en) * 2019-09-12 2022-10-21 腾讯音乐娱乐科技(深圳)有限公司 Word weight obtaining method and device and storage medium
CN110852112B (en) * 2019-11-08 2023-05-05 语联网(武汉)信息技术有限公司 Word vector embedding method and device
CN111198965B (en) * 2019-12-31 2024-04-19 腾讯科技(深圳)有限公司 Song retrieval method, song retrieval device, server and storage medium
CN111881316B (en) * 2020-07-28 2024-07-19 腾讯音乐娱乐科技(深圳)有限公司 Search method, search device, server and computer readable storage medium
CN112507724A (en) * 2020-12-03 2021-03-16 平安科技(深圳)有限公司 Word weight determination method, device, server and computer readable storage medium
CN112528646B (en) * 2020-12-07 2023-04-18 深圳市优必选科技股份有限公司 Word vector generation method, terminal device and computer-readable storage medium
CN112579792B (en) * 2020-12-22 2023-08-04 东北大学 PGAT and FTATT-based remote supervision relation extraction method
CN112580352B (en) * 2021-03-01 2021-06-04 腾讯科技(深圳)有限公司 Keyword extraction method, device and equipment and computer storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Music tag topic retrieval based on MT-LDA; Xu Yunzhi; Shao Xi; Computer Technology and Development; 2016-05-25 (No. 07); full text *
Music retrieval model based on example semantics; Qin Jing; Journal of Shandong University (Natural Science); 2017-06-30; Vol. 52 (No. 6); full text *

Similar Documents

Publication Publication Date Title
WO2021139074A1 (en) Knowledge graph-based case retrieval method, apparatus, device, and storage medium
US9110922B2 (en) Joint embedding for item association
US20110029561A1 (en) Image similarity from disparate sources
WO2021226840A1 (en) Hot news intention recognition method, apparatus and device and readable storage medium
CN111708942B (en) Multimedia resource pushing method, device, server and storage medium
Zhang et al. A retrieval algorithm of encrypted speech based on short-term cross-correlation and perceptual hashing
Elshater et al. godiscovery: Web service discovery made efficient
EP2707808A2 (en) Exploiting query click logs for domain detection in spoken language understanding
WO2023240878A1 (en) Resource recognition method and apparatus, and device and storage medium
US20210365805A1 (en) Estimating number of distinct values in a data set using machine learning
CN114328800A (en) Text processing method and device, electronic equipment and computer readable storage medium
CN115952770B (en) Data standardization processing method and device, electronic equipment and storage medium
CN113377997B (en) Song retrieval method, electronic equipment and computer readable storage medium
CN111984867A (en) Network resource determination method and device
CN112685623B (en) Data processing method and device, electronic equipment and storage medium
CN113792131B (en) Keyword extraction method and device, electronic equipment and storage medium
CN114117239A (en) House resource pushing method, device and equipment
CN112148902A (en) Data processing method, device, server and storage medium
CN111539208B (en) Sentence processing method and device, electronic device and readable storage medium
US11983209B1 (en) Partitioning documents for contextual search
CN115659945B (en) Standard document similarity detection method, device and system
CN112883232B (en) Resource searching method, device and equipment
AU2022204665B2 (en) Automated search and presentation computing system
US11836176B2 (en) System and method for automatic profile segmentation using small text variations
CN113220841B (en) Method, apparatus, electronic device and storage medium for determining authentication information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant