WO2021081945A1 - Text classification method and apparatus, electronic device, and storage medium - Google Patents
Text classification method and apparatus, electronic device, and storage medium
- Publication number
- WO2021081945A1 (PCT/CN2019/114871)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- semantic
- text
- network
- classification
- classified
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
Description
- the embodiments of the present application relate to computer technology, and in particular, to a text classification method, device, electronic device, and storage medium.
- Text classification refers to the automatic classification of text by an electronic device according to a certain classification system or standard, and it is widely used in daily life. For example, for a recommendation service, the electronic device is required to classify text drawn from a large amount of recommended content; for another example, when the electronic device performs intelligent voice control, it is required to classify the text converted from the user's voice.
- the implementation of text classification is inseparable from the model, and the accuracy of text classification mainly depends on the model.
- This application provides a text classification method, device, electronic equipment and storage medium, which can improve the accuracy of text classification.
- An embodiment of the present application provides a text classification method, including: acquiring text to be classified; converting the text to be classified into a semantic matrix according to a semantic representation network of a pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and a semantic classification network; performing convolution operations on the semantic matrix at a convolution layer of the semantic classification network to obtain semantic features of various sizes, wherein the semantic classification network includes a convolution layer and a classification layer with different hyperparameters; and performing classification processing on the text to be classified at the classification layer according to the semantic features of the various sizes, to determine the text category of the text to be classified.
- an embodiment of the present application also provides a text classification device, including:
- the first obtaining module is used to obtain the text to be classified
- the first conversion module is configured to convert the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and the semantic classification network;
- the convolution operation module is used to perform convolution operations on the semantic matrix at the convolution layer of the semantic classification network to obtain semantic features of various sizes, wherein the semantic classification network includes a convolution layer and a classification layer with different hyperparameters;
- the classification module is configured to classify the text to be classified at the classification layer according to the semantic features of the multiple sizes, so as to determine the text category of the text to be classified.
- An embodiment of the present application also provides an electronic device, including: a processor, a memory, and a computer program stored in the memory and executable on the processor. When executing the computer program, the processor implements the text classification method: acquiring the text to be classified; converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and the semantic classification network; performing convolution operations on the semantic matrix at the convolution layer of the semantic classification network to obtain semantic features of various sizes, wherein the semantic classification network includes a convolution layer and a classification layer with different hyperparameters; and performing classification processing on the text to be classified at the classification layer according to the semantic features of the various sizes, to determine the text category of the text to be classified.
- An embodiment of the present application also provides a storage medium containing executable instructions of an electronic device. When the executable instructions are executed by a processor of the electronic device, the text classification method described in the embodiments of the present application is performed.
- FIG. 1 is a schematic diagram of a first flow of a text classification method provided by an embodiment of the present application.
- FIG. 2 is a schematic structural diagram of a text classification model provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of a first structure of a semantic classification network provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of a second structure of a semantic classification network provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of a second flow of a text classification method provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of a third flow of a text classification method provided by an embodiment of the present application.
- FIG. 7 is a schematic diagram of a fourth flow of a text classification method provided by an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of a text classification device provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of a first structure of an electronic device provided by an embodiment of the present application.
- FIG. 10 is a schematic diagram of a second structure of an electronic device provided by an embodiment of the present application.
- the embodiment of the present application provides a text classification method, and the text classification method is applied to an electronic device.
- the execution subject of the text classification method may be the text classification device provided in the embodiment of the present application, or an electronic device integrated with the text classification device.
- The text classification device may be implemented in hardware or software, and the electronic device may be a smart device equipped with a processor and having processing capability, such as a mobile phone, tablet computer, handheld computer, notebook computer, or desktop computer.
- FIG. 1 is a schematic diagram of the first process of a text classification method provided by an embodiment of this application.
- the text classification method is applied to the electronic device provided in the embodiment of the present application.
- the process of the text classification method provided in the embodiment of the present application may be as follows:
- The text to be classified is the object on which text classification is performed.
- the length of the text to be classified is not specifically limited in the embodiments of the present application.
- the text to be classified can be a sentence, a paragraph, an article, and so on.
- the embodiment of the present application does not specifically limit it.
- the text to be classified may be Chinese text, English text, Japanese text, etc.
- The electronic device can obtain the text to be classified according to a user's selection instruction. For example, the stored document 1 is used as the text to be classified, or the seventh paragraph in document 1 is used as the text to be classified.
- The electronic device may also obtain the text to be classified through an image, where the image carries text information. For example, the electronic device acquires an image through a camera, the image carrying the text "露从今夜白，月是故乡明" (a line of Du Fu's verse: "From this night the dew turns white; the moon is bright over my hometown"); text recognition is then performed on the acquired image, and that line of verse becomes the text to be classified.
- FIG. 2 is a schematic structural diagram of a text classification model provided by an embodiment of the application.
- the text classification model is composed of a semantic representation network and a semantic classification network.
- the semantic representation network is mainly used to transform text.
- Semantic classification network is mainly used to classify text. It should be noted that the semantic classification network takes the output of the semantic representation network as input.
- the semantic matrix is obtained by combining the semantic vectors of the characters in the text to be classified.
- In some embodiments, the number of rows of the semantic matrix is equal to the number of characters in the text to be classified, and the number of columns is equal to the dimension of the semantic vector of each character.
- Alternatively, the number of rows of the semantic matrix is equal to the dimension of the semantic vector of each character, and the number of columns is equal to the number of characters of the text to be classified. It is understandable that the dimension of the semantic vector of each character mainly depends on the dictionary in the semantic representation network.
- the semantic vector of "chun” is (X11, X12, X13), the semantic vector of " ⁇ ” is (X21, X22, X23), and the semantic vector of "to” is ( X31, X32, X33), the semantic vector of "le” is (X41, X42, X43), then the semantic matrix of "spring is here" is as follows:
- After obtaining the text to be classified, the electronic device inputs it into the semantic representation network of the pre-trained text classification model, which outputs the semantic matrix of the text to be classified.
- After converting the text to be classified into a semantic matrix, the electronic device performs convolution operations on the semantic matrix at the convolution layer of the semantic classification network, using multiple convolution kernels of different sizes together with the convolution strides corresponding to those kernels, to obtain semantic features of various sizes. It should be noted that the convolution operation in this scheme is a one-dimensional convolution operation.
- the semantic classification network includes a convolutional layer and a classification layer with different hyperparameters, and the semantic classification network may also include an input layer and an output layer.
- the hyperparameters include the convolution step size, the convolution kernel size, and the padding size.
- The semantic feature size obtained is mainly determined by the hyperparameters. Assuming that the size of the semantic matrix is N1×N2, where N1 refers to the number of characters in the text to be classified and N2 refers to the dimension of the semantic vector of each character, and that the convolution kernel size is F1×F2, the semantic feature size is calculated as:

      M = (N1 - F1 + 2P) / S + 1

  where M represents the semantic feature size, N1 represents the number of rows of the semantic matrix, P represents the padding size, and S represents the convolution stride. It should be noted that the padding size is adjusted according to the convolution kernel size and the convolution stride (so that M is an integer), and F2 in the convolution kernel size is equal to N2, i.e. the kernel spans the full width of the semantic matrix.
- Alternatively, assuming that the size of the semantic matrix is N3×N4, where N3 refers to the dimension of the semantic vector of each character and N4 refers to the number of characters in the text to be classified, and that the convolution kernel size is F3×F4, the semantic feature size is calculated as:

      M = (N4 - F4 + 2P) / S + 1

  where M represents the semantic feature size, N4 represents the number of columns of the semantic matrix, P represents the padding size, and S represents the convolution stride. It should be noted that the padding size is adjusted according to the convolution kernel size and the convolution stride, and F3 in the convolution kernel size is equal to N3.
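- The two formulas above can be checked with a small helper. The following sketch (Python is an assumption) computes the semantic feature size and uses an integer-divisibility check to stand in for the padding adjustment described above:

```python
def semantic_feature_size(n: int, f: int, p: int, s: int) -> int:
    """M = (N - F + 2P) / S + 1, where N is the matrix extent along the
    convolved axis (N1 rows or N4 columns), F the kernel extent along that
    axis (F1 or F4), P the padding size, and S the convolution stride."""
    assert (n - f + 2 * p) % s == 0, "adjust padding so M is an integer"
    return (n - f + 2 * p) // s + 1

# A 100-character text, kernel extent 3, no padding, stride 1 -> 98 features.
print(semantic_feature_size(100, 3, 0, 1))  # 98
# A larger kernel with stride 2 yields fewer, coarser features.
print(semantic_feature_size(100, 4, 1, 2))  # 50
```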
- After configuration, the convolution layer can output semantic features of various sizes. For example, suppose the semantic matrix is 100×100. Convolution kernels of different sizes correspond to different receptive fields: a larger convolution kernel has a larger receptive field than a smaller convolution kernel and can extract richer information. Therefore, in this example, feature extraction is performed through two convolution kernels, one large and one small, so that the obtained overall semantic features contain richer information, which can improve the accuracy of text classification. The convolution stride can also be further adjusted so that, while the semantic features of the text to be classified are enriched, the dimensionality of the features is reduced, improving the computational efficiency of the network.
- FIG. 3 is a first structural schematic diagram of a semantic classification network provided by an embodiment of this application.
- One convolutional layer in the semantic classification network may include multiple sub-convolutional layers, where each sub-convolutional layer has different hyperparameters; for example, the convolution kernel sizes of the sub-convolutional layers differ, and their convolution strides differ.
- The electronic device can perform convolution operations on the semantic matrix in multiple sub-convolutional layers of the same convolutional layer at the same time to obtain semantic features of various sizes; each sub-convolutional layer performs one convolution operation on the semantic matrix and yields semantic features of one size.
- The semantic classification network in this embodiment may have multiple such convolutional layers, where each convolutional layer is composed of multiple sub-convolutional layers whose convolution kernel sizes differ.
- FIG. 4 is a schematic diagram of a second structure of a semantic classification network provided by an embodiment of this application.
- A convolutional layer of the semantic classification network can have multiple convolution kernels, and each convolution kernel can perform convolution operations according to its corresponding convolution stride and padding size.
- The electronic device may perform convolution operations on the semantic matrix through multiple convolution kernels of different sizes, each with its own convolution stride, at the convolution layer to obtain semantic features of various sizes; each convolution kernel's operation on the semantic matrix yields semantic features of one size.
- The semantic classification network in this embodiment can have multiple such convolutional layers, where each convolutional layer can have multiple convolution kernels, and each convolution kernel performs convolution operations according to its corresponding convolution stride and padding size. A sketch of such a layer follows.
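- As a sketch of such a layer (the PyTorch framework and all layer sizes are assumptions for illustration), the module below runs several one-dimensional convolutions with different kernel sizes and strides in parallel, each branch producing semantic features of a different size:

```python
import torch
import torch.nn as nn

class MultiSizeConvLayer(nn.Module):
    """One convolution layer whose kernels have different hyperparameters
    (kernel size and stride), applied to the semantic matrix in parallel."""
    def __init__(self, embed_dim=100, out_channels=64,
                 kernel_sizes=(2, 4), strides=(1, 2)):
        super().__init__()
        # One-dimensional convolutions over the character axis; each kernel
        # implicitly spans the full embedding dimension (F2 = N2 above).
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, out_channels, kernel_size=k, stride=s)
            for k, s in zip(kernel_sizes, strides)
        )

    def forward(self, x):
        # x: (batch, characters, embed_dim); Conv1d expects channels first.
        x = x.transpose(1, 2)
        # Each branch yields semantic features of one size.
        return [torch.relu(conv(x)) for conv in self.convs]

features = MultiSizeConvLayer()(torch.randn(8, 100, 100))
print([f.shape for f in features])  # [(8, 64, 99), (8, 64, 49)]
```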
- The electronic device can then classify the text to be classified at the classification layer of the semantic classification network according to the semantic features of various sizes, to determine the text category of the text to be classified. For example, referring to FIG. 3, the semantic features output by the first convolutional layer and the second convolutional layer are combined at the classification layer of the semantic classification network to determine the category label, and thus the text category, of the text to be classified.
- In summary, after the electronic device obtains the text to be classified, it converts the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model; it then performs convolution operations on the semantic matrix at the convolution layer of the semantic classification network of the text classification model to obtain semantic features of various sizes, wherein the semantic classification network includes convolution layers with different hyperparameters; finally, it performs classification processing on the text to be classified at the classification layer of the semantic classification network to determine the text category of the text to be classified. Because convolution layers with different hyperparameters yield semantic features of various sizes, the semantic features of the text to be classified are enriched, preventing the low classification accuracy that results when the text to be classified has too few semantic features, and thereby improving the accuracy of text classification.
- FIG. 5 is a schematic diagram of a second process of a text classification method provided by an embodiment of this application.
- 102 may include 1021 and 1022, as follows:
- In 1021, the electronic device converts each character in the text to be classified into a semantic vector according to the semantic representation network of the pre-trained text classification model, where one character is converted into one semantic vector. In 1022, after all the characters of the text to be classified have been converted into semantic vectors, the semantic vectors of the characters are combined into a semantic matrix according to the sequence of the characters.
- the electronic device may remove invalid characters in the text to be classified.
- the invalid characters of the text to be classified include emoticons, space characters, garbled characters, etc.
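- A minimal preprocessing sketch is shown below; the exact set of characters treated as invalid (whitespace, common emoji ranges, the Unicode replacement character) is an assumption:

```python
import re

def remove_invalid_chars(text: str) -> str:
    text = re.sub(r"\s+", "", text)  # space characters
    # Common emoticon/symbol ranges (an illustrative, not exhaustive, set).
    text = re.sub(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]", "", text)
    return text.replace("\uFFFD", "")  # garbled (replacement) characters

print(remove_invalid_chars("春天 到了 🌸"))  # -> 春天到了
```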
- the semantic classification network further includes a pooling layer.
- In some embodiments, the electronic device can perform pooling processing on the semantic features of each size at the pooling layer, and then classify the text to be classified at the classification layer according to the pooled semantic features.
- The electronic device can use max pooling at the pooling layer to pool the semantic features of each size.
- The electronic device can also use k-max pooling (k_maxpooling) at the pooling layer to pool the semantic features of each size. In this case, the electronic device divides the semantic features of each size into multiple groups and takes, from each group, the first largest, the second largest, ..., up to the k-th largest semantic feature; that is, k semantic features are taken from each group according to their magnitudes.
- Using k-max pooling at the pooling layer to pool the semantic features of each size retains richer semantic features and improves the accuracy of text classification. A sketch follows.
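- The following sketch shows one way to implement k-max pooling (the PyTorch framework is an assumption), keeping the k largest values of a feature map in their original order:

```python
import torch

def k_max_pooling(features: torch.Tensor, k: int, dim: int = -1) -> torch.Tensor:
    # Indices of the k largest values, re-sorted by position so that the
    # kept features preserve the original sequence order.
    idx = features.topk(k, dim=dim).indices.sort(dim=dim).values
    return features.gather(dim, idx)

x = torch.tensor([[1.0, 5.0, 2.0, 4.0, 3.0]])
print(k_max_pooling(x, k=2))  # tensor([[5., 4.]]) -- the two largest, in order
```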
- 104 may include 1041 and 1042, as follows:
- the electronic device calculates the probability value of the text to be classified in each preset text category according to the semantic features of multiple sizes and the preset parameter matrix at the classification layer.
- the preset text category with the largest probability value is determined as the text category of the text to be classified.
- the number of preset text categories is not specifically limited in the embodiment of the present application, for example, the number of preset text categories is 30.
- The probability value of the text to be classified in a preset text category refers to the probability that the text to be classified belongs to that preset text category.
- the number of preset text categories is equal to the number of probability values. It is understandable that the probability value calculated each time is greater than or equal to 0 and less than or equal to 1.
- For example, suppose there are 4 preset text categories, denoted the S1 text category, S2 text category, S3 text category, and S4 text category. The electronic device calculates the probability value P1 of the text to be classified in the S1 text category, the probability value P2 in the S2 text category, the probability value P3 in the S3 text category, and the probability value P4 in the S4 text category. When the number of probability values obtained equals the number of preset text categories, the largest probability value is found among P1, P2, P3, and P4, and the preset text category with the largest probability value is determined as the text category of the text to be classified. If P1>P2>P3>P4, the preset text category corresponding to P1 (the S1 text category) is the text category of the text to be classified.
- FIG. 6 is a schematic diagram of the third process of the text classification method provided by an embodiment of this application.
- Before 102, the method further includes 105, 106, and 107, as follows:
- In 106, the plurality of first training texts in the first training set are converted into a plurality of first semantic matrices according to the semantic representation network.
- Before converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network, the electronic device obtains the plurality of first training texts to form the first training set.
- In this way, the electronic device can perform supervised training on a preset convolutional neural network, use the trained convolutional neural network as the semantic classification network, and form the text classification model from the semantic representation network and the semantic classification network.
- The first training text in the first training set carries a target category label. The target category label may be manually set by the user, and the first training texts correspond one-to-one with their target category labels.
- The format of the first training text may be "text content + target category label", or "target category label + text content", and so on, with the two parts joined by a separator.
- the electronic device converts the text content of each first training text in the first training set into a first semantic matrix according to the semantic representation network. After the text content of the plurality of first training texts in the first training set is converted, a plurality of first semantic matrices are obtained.
- When converting the text content of each first training text in the first training set into a first semantic matrix, the electronic device may convert each character of the text content into a semantic vector according to the semantic representation network, and then, based on the sequence of the characters in the text content, combine the semantic vectors of the characters into the first semantic matrix.
- the first training text is "Celebrating the 70th Anniversary of the Founding of the People’s Republic of China ⁇ National Day”
- the text content of the first training text (“ "Celebrating the 70th Anniversary of the Founding of the People’s Republic of China") each character is represented by a semantic vector
- the semantic vector of each character is combined into the first semantic matrix according to the sequence of each character in the text content.
- the dimension of the semantic vector can be a dimension greater than or equal to 3.
- the first semantic matrix can be seen in the above expression.
- the dimension of the semantic vector can be 6 dimensions, that is, the number of components in the semantic vector is 6.
- the semantic vector of " ⁇ " is (A011, A012, A013, A014, A015, A016)
- the semantic vector of " ⁇ ” is (A021, A022, A023, A024, A025, A026), and so on.
- The electronic device likewise converts the target category label of each first training text in the first training set into a third semantic matrix according to the semantic representation network. After the target category labels of the multiple first training texts in the first training set are converted, multiple third semantic matrices are obtained.
- When converting the target category label of each first training text into a third semantic matrix, the electronic device can convert each character of the target category label into a semantic vector according to the semantic representation network, and then, based on the sequence of the characters in the target category label, combine the semantic vectors of the characters into the third semantic matrix.
- For example, each character in the target category label ("National Day") of the first training text is represented by a semantic vector, and the semantic vectors of the characters are combined into a third semantic matrix.
- After converting the plurality of first training texts in the first training set into a plurality of first semantic matrices and a plurality of third semantic matrices, the preset convolutional neural network is trained based on the plurality of first semantic matrices and the plurality of third semantic matrices. The trained convolutional neural network is used as the semantic classification network, and the text classification model is formed by the semantic representation network and the semantic classification network.
- Optionally, the electronic device may iteratively train the preset convolutional neural network based on the multiple first semantic matrices and a preset loss function until the network converges.
- the preset loss function is not specifically limited in the embodiment of the present application, for example, the preset loss function is a cross-entropy loss function.
- the electronic device inputs a plurality of first semantic matrices into a preset convolutional neural network, and outputs the probability value of the first training text corresponding to each first semantic matrix in each preset text category.
- The electronic device may perform iterative training on the preset convolutional neural network based on the multiple first semantic matrices and the preset loss function until the loss value of the preset loss function is minimized and the accuracy of the convolutional neural network tends to be stable.
- multiple verification texts are obtained to form a verification set, and the accuracy of the convolutional neural network is calculated through the verification set. If the accuracy becomes stable, stop training, and use the trained convolutional neural network as a semantic classification network. If the accuracy rate has not stabilized, adjust the model parameters of the convolutional neural network, and continue to train the convolutional neural network.
- When the electronic device performs iterative training on the preset convolutional neural network, each time the model parameters are adjusted it reuses the multiple first semantic matrices to train the preset convolutional neural network and calculates the current accuracy of the convolutional neural network through the verification set. The current accuracy is compared with the saved historical accuracy: if the current accuracy is greater than the historical accuracy, the model parameters corresponding to the historical accuracy are deleted, and the current accuracy and the model parameters corresponding to it are saved; if the current accuracy is less than or equal to the historical accuracy, the current accuracy is saved but its model parameters are not. If the accuracy obtained over multiple rounds does not increase, training ends.
- the preset convolutional neural network is trained based on the plurality of first semantic matrices, only the model parameters of the convolutional neural network are updated, and the model parameters of the semantic representation network are not changed.
- the text classification model constructed by supervised training of the convolutional neural network helps to improve the accuracy of text classification.
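- A condensed training-loop sketch consistent with this procedure is given below (PyTorch and the loader interfaces are assumptions; the loaders are assumed to yield batches of first semantic matrices with target labels). Only the classification network's parameters are updated, and the parameters achieving the best verification accuracy are kept:

```python
import torch
import torch.nn as nn

def train_classifier(cls_net, train_loader, val_loader, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(cls_net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # e.g. the cross-entropy loss named above
    best_acc = -1.0
    best_state = {k: v.clone() for k, v in cls_net.state_dict().items()}
    for _ in range(epochs):
        for matrices, labels in train_loader:
            loss = loss_fn(cls_net(matrices), labels)
            optimizer.zero_grad()
            loss.backward()  # gradients only reach the classification network
            optimizer.step()
        # Accuracy on the verification set decides which parameters to keep.
        correct = total = 0
        with torch.no_grad():
            for matrices, labels in val_loader:
                correct += (cls_net(matrices).argmax(1) == labels).sum().item()
                total += labels.numel()
        acc = correct / total
        if acc > best_acc:  # save the parameters of the best accuracy so far
            best_acc = acc
            best_state = {k: v.clone() for k, v in cls_net.state_dict().items()}
    cls_net.load_state_dict(best_state)  # discard parameters of worse rounds
    return cls_net
```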
- FIG. 7 is a schematic diagram of the fourth process of the text classification method provided by an embodiment of the application.
- Before 106, the method further includes 108 and 109, as follows:
- the semantic representation network of the text classification model is a BERT network after fine-tuning training.
- Before converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network, the electronic device may obtain a plurality of second training texts to form a second training set, and use the second training set to perform fine-tuning training on the BERT network to update the model parameters of the BERT network.
- the first training text and the second training text belong to the same type of information, but the first training text is different from the second training text.
- the first training text is used to train the preset convolutional neural network to obtain the semantic classification network of the text classification model
- the second training text is used to fine-tune the training of the BERT network to obtain the semantic representation network of the text classification model.
- the BERT network in this scheme is a multi-layer bidirectional encoder.
- The BERT network includes 12 transformer layers, and each transformer layer includes four structures: self-attention, normalization, fully connected, and normalization.
- Because the semantic representation network in the text classification model in this solution uses the get_sequence_output function, the output of the semantic representation network is a semantic matrix composed of the semantic vectors of characters. Compared with outputting a semantic matrix composed of the semantic vectors of words, outputting a semantic matrix composed of the semantic vectors of characters can improve the classification accuracy of short texts.
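- The get_sequence_output function belongs to the original TensorFlow BERT code. As an analogous illustration only (Hugging Face transformers, not the embodiment's code), the sketch below obtains the per-token output of a Chinese BERT model, i.e. a semantic matrix of character-level semantic vectors rather than a single sentence vector:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")

inputs = tokenizer("春天到了", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One semantic vector per token; Chinese BERT tokenizes per character, so
# this is a semantic matrix of character vectors (plus [CLS]/[SEP] tokens).
semantic_matrix = outputs.last_hidden_state[0]
print(semantic_matrix.shape)  # (num_tokens, hidden_size), here (6, 768)
```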
- This solution trains the semantic representation network and the semantic classification network separately: the electronic device first fine-tunes the semantic representation network and then trains the semantic classification network, obtaining a well-trained text classification model and thereby improving the accuracy of text classification.
- this application is not limited by the order of execution of the various steps described, and certain steps may also be performed in other order or at the same time if there is no conflict.
- acquiring multiple pieces of first training text to form the first training set and using the second training set to train the BERT network may be performed at the same time.
- 107 includes 1071 and 1072:
- In this case, the electronic device trains the preset convolutional neural network with both the multiple first semantic matrices obtained from the first training texts and the multiple second semantic matrices obtained from the second training texts; that is, the second training texts are used not only to fine-tune the BERT network but also to train the preset convolutional neural network.
- This solution thus applies transfer learning when training the convolutional neural network: reusing the multiple second semantic matrices obtained while fine-tuning the BERT network to train the preset convolutional neural network effectively prevents overfitting of the resulting text classification model and improves the accuracy of text classification. A sketch of merging the two data sources follows.
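- A minimal sketch of merging the two sources of training data (the dataset objects are assumed interfaces yielding pairs of a semantic matrix and a target label):

```python
from torch.utils.data import ConcatDataset, DataLoader

def combined_loader(first_matrix_dataset, second_matrix_dataset, batch_size=32):
    # First semantic matrices (from the first training texts) and second
    # semantic matrices (obtained while fine-tuning the BERT network) feed
    # one training stream for the preset convolutional neural network.
    merged = ConcatDataset([first_matrix_dataset, second_matrix_dataset])
    return DataLoader(merged, batch_size=batch_size, shuffle=True)
```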
- In some embodiments, the semantic representation network of the text classification model is a BERT network, and the semantic representation network, the baseline BERT network used for fine-tuning training, and the source BERT network are pairwise distinct networks of the same type. Before converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network, transfer learning is used to determine the model parameters of the semantic representation network in the text classification model.
- Specifically, the electronic device may obtain the model parameters of the source BERT network and load them into the baseline BERT network, then obtain multiple third training texts to form a third training set, use the third training set to fine-tune the baseline BERT network to update its model parameters, and finally load the baseline BERT network's updated model parameters into the BERT network serving as the semantic representation network of the text classification model.
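- A sketch of this parameter hand-off using PyTorch state dictionaries (the mechanism, model names, and file path are assumptions for illustration):

```python
import torch
from transformers import BertModel

source = BertModel.from_pretrained("bert-base-chinese")    # source BERT network
baseline = BertModel.from_pretrained("bert-base-chinese")  # baseline BERT network
baseline.load_state_dict(source.state_dict())              # load source parameters

# ... fine-tune `baseline` on the third training set here ...

torch.save(baseline.state_dict(), "baseline_finetuned.pt")
# The semantic representation network then loads the baseline BERT network's
# updated model parameters:
rep_net = BertModel.from_pretrained("bert-base-chinese")
rep_net.load_state_dict(torch.load("baseline_finetuned.pt"))
```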
- The first training text, the second training text, and the third training text belong to the same type of information; the third training text is different from the first training text, and the third training text may be either the same as or different from the second training text.
- Fig. 8 is a schematic structural diagram of a text classification device provided by an embodiment of the present application.
- the device is used to execute the text classification method provided in the foregoing embodiment, and has functional modules and beneficial effects corresponding to the execution method.
- the text classification device 200 specifically includes: a first acquisition module 201, a first conversion module 202, a convolution operation module 203, and a classification module 204, wherein:
- the first obtaining module 201 is used to obtain the text to be classified
- the first conversion module 202 is configured to convert the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, where the text classification model is composed of the semantic representation network and the semantic classification network;
- The convolution operation module 203 is configured to perform convolution operations on the semantic matrix at the convolution layer of the semantic classification network to obtain semantic features of various sizes, wherein the semantic classification network includes a convolution layer and a classification layer with different hyperparameters.
- the classification module 204 is configured to classify the text to be classified at the classification layer according to the semantic features of the multiple sizes, so as to determine the text category of the text to be classified.
- When converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, the first conversion module 202 may be used to: convert each character in the text to be classified into a semantic vector according to the semantic representation network, and combine the semantic vectors of the characters into a semantic matrix based on the order of the characters.
- The classification module 204 may be used to: calculate, at the classification layer according to the semantic features of the multiple sizes, the probability value of the text to be classified in each preset text category, and determine the preset text category with the largest probability value as the text category of the text to be classified.
- In some embodiments, the text classification device 200 further includes a pooling processing module, which is used to perform pooling processing on the semantic features of each size at the pooling layer; the classification module 204 is then further configured to perform classification processing on the text to be classified at the classification layer according to the pooled semantic features.
- In some embodiments, before the text to be classified is converted into a semantic matrix according to the semantic representation network of the pre-trained text classification model, the text classification device 200 further includes a removal module for removing invalid characters from the text to be classified.
- the text classification apparatus 200 before acquiring the text to be classified, the text classification apparatus 200 further includes a second acquisition module, a second conversion module, and a first training module:
- the second acquisition module is used to acquire a plurality of first training texts to form a first training set
- the second conversion module is configured to convert the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network;
- The first training module is configured to train a preset convolutional neural network based on the plurality of first semantic matrices and to use the trained convolutional neural network as the semantic classification network, the text classification model being formed by the semantic representation network and the semantic classification network.
- the text classification apparatus 200 before converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network, the text classification apparatus 200 further includes a third acquisition module And the second training module:
- the third acquisition module is used to acquire a plurality of second training texts to form a second training set
- the second training module is configured to use the second training set to train the BERT network to update the model parameters of the BERT network.
- In some embodiments, when training a preset convolutional neural network based on the plurality of first semantic matrices, the first training module is further configured to: acquire a plurality of second semantic matrices obtained when the BERT network is trained using the second training set, and train the preset convolutional neural network based on the plurality of first semantic matrices and the plurality of second semantic matrices.
- In some embodiments, when training a preset convolutional neural network based on the plurality of first semantic matrices, the first training module may be used to: iteratively train the preset convolutional neural network based on the plurality of first semantic matrices and a preset loss function until convergence.
- In operation, the first acquisition module 201 acquires the text to be classified; the first conversion module 202 converts the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model; the convolution operation module 203 then performs convolution operations on the semantic matrix at the convolution layer of the semantic classification network of the text classification model to obtain semantic features of various sizes, wherein the semantic classification network includes convolution layers and classification layers with different hyperparameters; finally, the classification module 204 classifies the text to be classified at the classification layer according to the semantic features of the multiple sizes to determine the text category of the text to be classified. In this way, the semantic features of the text to be classified are enriched and the low classification accuracy caused by sparse semantic features is prevented, thereby improving the accuracy of text classification.
- The text classification device provided in this embodiment of the application belongs to the same concept as the text classification method in the above embodiments; any method provided in the text classification method embodiments can be run on the text classification device, and for details of its specific implementation, refer to the embodiments of the text classification method, which will not be repeated here.
- the embodiment of the present application provides a computer-readable storage medium on which a computer program is stored.
- The storage medium may be a magnetic disk, an optical disk, a read-only memory (Read Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
- the electronic device 300 includes a processor 301 and a memory 302. Wherein, the processor 301 and the memory 302 are electrically connected.
- The processor 301 is the control center of the electronic device 300: it connects the various parts of the entire electronic device through various interfaces and lines, and executes the various functions of the electronic device 300 and processes data by running or loading the computer program stored in the memory 302 and calling the data stored in the memory 302.
- the memory 302 may be used to store software programs and modules.
- the processor 301 executes various functional applications and data processing by running the computer programs and modules stored in the memory 302.
- the memory 302 may mainly include a storage program area and a storage data area.
- The storage program area may store an operating system, a computer program required by at least one function (such as a sound playback function or an image playback function), and the like; the storage data area may store data created through the use of the electronic device, and the like.
- The memory 302 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
- the memory 302 may also include a memory controller to provide the processor 301 with access to the memory 302.
- The processor 301 in the electronic device 300 loads the instructions corresponding to the processes of one or more computer programs into the memory 302 and runs them, thereby implementing the following steps: acquiring the text to be classified; converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and the semantic classification network; performing convolution operations on the semantic matrix at the convolution layer of the semantic classification network to obtain semantic features of various sizes, wherein the semantic classification network includes a convolution layer and a classification layer with different hyperparameters; and performing classification processing on the text to be classified at the classification layer according to the semantic features of the multiple sizes, to determine the text category of the text to be classified.
- FIG. 10 is a schematic diagram of a second structure of an electronic device provided by an embodiment of the application.
- The electronic device further includes a camera component 303, a radio frequency circuit 304, an audio circuit 305, and a power supply 306.
- the camera component 303, the radio frequency circuit 304, the audio circuit 305, and the power supply 306 are electrically connected to the processor 301, respectively.
- the camera component 303 may include an image processing circuit, which may be implemented by hardware and/or software components, and may include various processing units that define an image signal processing (Image Signal Processing) pipeline.
- the image processing circuit may at least include: multiple cameras, an image signal processor (Image Signal Processor, ISP processor), a control logic, an image memory, a display, and the like.
- Each camera may include at least one or more lenses and image sensors.
- the image sensor may include a color filter array (such as a Bayer filter). The image sensor can obtain the light intensity and wavelength information captured with each imaging pixel of the image sensor, and provide a set of raw image data that can be processed by the image signal processor.
- the radio frequency circuit 304 may be used to transmit and receive radio frequency signals to establish wireless communication with network equipment or other electronic equipment through wireless communication, and to transmit and receive signals with the network equipment or other electronic equipment.
- the audio circuit 305 can be used to provide an audio interface between the user and the electronic device through a speaker or a microphone.
- the power supply 306 can be used to power various components of the electronic device 300.
- the power supply 306 may be logically connected to the processor 301 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
- In this embodiment as well, the processor 301 in the electronic device 300 loads the instructions corresponding to the processes of one or more computer programs into the memory 302 and runs them, thereby implementing the following steps: acquiring the text to be classified; converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and the semantic classification network; performing convolution operations on the semantic matrix at the convolution layer of the semantic classification network to obtain semantic features of various sizes, wherein the semantic classification network includes a convolution layer and a classification layer with different hyperparameters; and performing classification processing on the text to be classified at the classification layer according to the semantic features of the multiple sizes, to determine the text category of the text to be classified.
- When converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, the processor 301 may execute: converting each character in the text to be classified into a semantic vector according to the semantic representation network, and combining the semantic vectors of the characters into a semantic matrix based on the order of the characters.
- When performing classification processing on the text to be classified at the classification layer according to the semantic features of the multiple sizes, the processor 301 may execute: calculating, at the classification layer according to the semantic features of the multiple sizes, the probability value of the text to be classified in each preset text category, and determining the preset text category with the largest probability value as the text category of the text to be classified.
- In some embodiments, the semantic classification network further includes a pooling layer. After obtaining semantic features of various sizes, the processor 301 may execute: performing pooling processing on the semantic features of each size; and, when classifying according to the semantic features of the multiple sizes, classifying the text to be classified at the classification layer according to the pooled semantic features.
- Before acquiring the text to be classified, the processor 301 may execute: acquiring a plurality of first training texts to form a first training set; converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network; and training a preset convolutional neural network based on the plurality of first semantic matrices, using the trained convolutional neural network as the semantic classification network, the text classification model being formed by the semantic representation network and the semantic classification network.
- In some embodiments, the semantic representation network is a BERT network. Before converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network, the processor 301 may execute: acquiring a plurality of second training texts to form a second training set, and using the second training set to train the BERT network to update the model parameters of the BERT network.
- When training the preset convolutional neural network based on the plurality of first semantic matrices, the processor 301 may execute: acquiring a plurality of second semantic matrices obtained when the BERT network is trained using the second training set, and training the preset convolutional neural network based on the plurality of first semantic matrices and the plurality of second semantic matrices.
- When training the preset convolutional neural network based on the plurality of first semantic matrices, the processor 301 may alternatively execute: iteratively training the preset convolutional neural network based on the plurality of first semantic matrices and a preset loss function until convergence.
- After acquiring the text to be classified, the electronic device provided in this embodiment converts the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, and then performs convolution operations on the semantic matrix at the convolution layer of the semantic classification network to obtain semantic features of various sizes, wherein the semantic classification network includes convolution layers with different hyperparameters; classification processing is then performed on the text to be classified at the classification layer of the semantic classification network to determine the text category of the text to be classified. This enriches the semantic features of the text to be classified and prevents the low text classification accuracy caused by a lack of semantic features, thereby improving the accuracy of text classification.
- The embodiments of the present application also provide a storage medium that stores a computer program; when the computer program is run on a computer, the computer executes the text classification method in any of the above embodiments: acquiring the text to be classified; converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and the semantic classification network; performing convolution operations on the semantic matrix at the convolution layer of the semantic classification network to obtain semantic features of various sizes, wherein the semantic classification network includes a convolution layer and a classification layer with different hyperparameters; and performing classification processing on the text to be classified at the classification layer according to the semantic features of the multiple sizes, to determine the text category of the text to be classified.
- the storage medium may be a magnetic disk, an optical disk, a read only memory (Read Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
- The computer program can be stored in a computer-readable storage medium, such as the memory of an electronic device, and executed by at least one processor in the electronic device; the execution process may include the process of the text classification method.
- the storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, and the like.
- For the text classification device of the embodiments of the present application, its functional modules may be integrated in one processing chip, or each module may exist alone physically, or two or more modules may be integrated in one module. The integrated module can be implemented in the form of hardware or in the form of a software function module; if implemented as a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
Claims (20)
- A text classification method, wherein the method comprises: acquiring text to be classified; converting the text to be classified into a semantic matrix according to a semantic representation network of a pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and a semantic classification network; performing convolution operations on the semantic matrix at a convolution layer of the semantic classification network to obtain semantic features of multiple sizes, wherein the semantic classification network comprises a convolution layer and a classification layer with different hyperparameters; and classifying the text to be classified at the classification layer according to the semantic features of the multiple sizes, so as to determine the text category of the text to be classified.
- The text classification method according to claim 1, wherein converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model comprises: converting each character in the text to be classified into a semantic vector according to the semantic representation network of the pre-trained text classification model; and combining the semantic vectors of the characters into a semantic matrix based on the order of the characters.
- The text classification method according to claim 1, wherein classifying the text to be classified at the classification layer according to the semantic features of the multiple sizes comprises: calculating, at the classification layer according to the semantic features of the multiple sizes, a probability value of the text to be classified in each preset text category; and determining the preset text category with the largest probability value as the text category of the text to be classified.
- The text classification method according to claim 1, wherein the semantic classification network further comprises a pooling layer, and after the semantic features of multiple sizes are obtained, the method further comprises: performing pooling processing on the semantic features of each size at the pooling layer; and classifying the text to be classified at the classification layer according to the semantic features of the multiple sizes comprises: classifying the text to be classified at the classification layer according to the pooled semantic features.
- The text classification method according to claim 1, wherein before converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, the method further comprises: removing invalid characters from the text to be classified.
- The text classification method according to claim 1, wherein before acquiring the text to be classified, the method further comprises: acquiring a plurality of first training texts to form a first training set; converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network; and training a preset convolutional neural network based on the plurality of first semantic matrices and using the trained convolutional neural network as the semantic classification network, the text classification model being formed by the semantic representation network and the semantic classification network.
- The text classification method according to claim 6, wherein the semantic representation network is a BERT network; and before converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network, the method further comprises: acquiring a plurality of second training texts to form a second training set; and training the BERT network using the second training set to update the model parameters of the BERT network.
- The text classification method according to claim 7, wherein training the preset convolutional neural network based on the plurality of first semantic matrices comprises: acquiring a plurality of second semantic matrices obtained when training the BERT network using the second training set; and training the preset convolutional neural network based on the plurality of first semantic matrices and the plurality of second semantic matrices.
- The text classification method according to claim 6, wherein the semantic representation network is a BERT network; and before converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network, the method further comprises: acquiring model parameters of a source BERT network and loading the model parameters of the source BERT network into a baseline BERT network; acquiring a plurality of third training texts to form a third training set; training the baseline BERT network using the third training set to update the model parameters of the baseline BERT network; and loading the updated model parameters of the baseline BERT network into the semantic representation network.
- The text classification method according to claim 6, wherein training the preset convolutional neural network based on the plurality of first semantic matrices comprises: iteratively training the preset convolutional neural network based on the plurality of first semantic matrices and a preset loss function until convergence.
- A text classification device, comprising: a first acquisition module for acquiring text to be classified; a first conversion module for converting the text to be classified into a semantic matrix according to a semantic representation network of a pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and a semantic classification network; a convolution operation module for performing convolution operations on the semantic matrix at a convolution layer of the semantic classification network to obtain semantic features of multiple sizes, wherein the semantic classification network comprises a convolution layer and a classification layer with different hyperparameters; and a classification module for classifying the text to be classified at the classification layer according to the semantic features of the multiple sizes, so as to determine the text category of the text to be classified.
- An electronic device, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein when executing the computer program the processor implements a text classification method: acquiring text to be classified; converting the text to be classified into a semantic matrix according to a semantic representation network of a pre-trained text classification model, wherein the text classification model is composed of the semantic representation network and a semantic classification network; performing convolution operations on the semantic matrix at a convolution layer of the semantic classification network to obtain semantic features of multiple sizes, wherein the semantic classification network comprises a convolution layer and a classification layer with different hyperparameters; and classifying the text to be classified at the classification layer according to the semantic features of the multiple sizes, so as to determine the text category of the text to be classified.
- The electronic device according to claim 12, wherein, when converting the text to be classified into a semantic matrix according to the semantic representation network of the pre-trained text classification model, the processor is configured to: convert each character in the text to be classified into a semantic vector according to the semantic representation network of the pre-trained text classification model; and combine the semantic vectors of the characters into a semantic matrix based on the order of the characters.
- The electronic device according to claim 12, wherein, when classifying the text to be classified at the classification layer according to the semantic features of the multiple sizes, the processor is configured to: calculate, at the classification layer according to the semantic features of the multiple sizes, a probability value of the text to be classified in each preset text category; and determine the preset text category with the largest probability value as the text category of the text to be classified.
- The electronic device according to claim 12, wherein the semantic classification network further comprises a pooling layer, and after the semantic features of multiple sizes are obtained, the processor is configured to: perform pooling processing on the semantic features of each size at the pooling layer; and when classifying the text to be classified at the classification layer according to the semantic features of the multiple sizes, the processor is configured to: classify the text to be classified at the classification layer according to the pooled semantic features.
- The electronic device according to claim 12, wherein, before acquiring the text to be classified, the processor is configured to: acquire a plurality of first training texts to form a first training set; convert the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network; and train a preset convolutional neural network based on the plurality of first semantic matrices and use the trained convolutional neural network as the semantic classification network, the text classification model being formed by the semantic representation network and the semantic classification network.
- The electronic device according to claim 16, wherein the semantic representation network is a BERT network; and before converting the plurality of first training texts in the first training set into a plurality of first semantic matrices according to the semantic representation network, the processor is configured to: acquire a plurality of second training texts to form a second training set; and train the BERT network using the second training set to update the model parameters of the BERT network.
- The electronic device according to claim 17, wherein, when training the preset convolutional neural network based on the plurality of first semantic matrices, the processor is configured to: acquire a plurality of second semantic matrices obtained when training the BERT network using the second training set; and train the preset convolutional neural network based on the plurality of first semantic matrices and the plurality of second semantic matrices.
- The electronic device according to claim 16, wherein, when training the preset convolutional neural network based on the plurality of first semantic matrices, the processor is configured to: iteratively train the preset convolutional neural network based on the plurality of first semantic matrices and a preset loss function until convergence.
- A storage medium containing electronic device executable instructions, wherein the electronic device executable instructions, when executed by a processor of an electronic device, are used to perform the text classification method according to any one of claims 1 to 10.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/114871 WO2021081945A1 (zh) | 2019-10-31 | 2019-10-31 | Text classification method and apparatus, electronic device, and storage medium |
CN201980099197.XA CN114207605A (zh) | 2019-10-31 | 2019-10-31 | Text classification method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/114871 WO2021081945A1 (zh) | 2019-10-31 | 2019-10-31 | Text classification method and apparatus, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021081945A1 (zh) | 2021-05-06 |
Family
ID=75715730
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/114871 WO2021081945A1 (zh) | 2019-10-31 | 2019-10-31 | 一种文本分类方法、装置、电子设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114207605A (zh) |
WO (1) | WO2021081945A1 (zh) |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104834747B (zh) * | 2015-05-25 | 2018-04-27 | 中国科学院自动化研究所 | 基于卷积神经网络的短文本分类方法 |
US10282589B2 (en) * | 2017-08-29 | 2019-05-07 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for detection and classification of cells using convolutional neural networks |
CN109508377A (zh) * | 2018-11-26 | 2019-03-22 | 南京云思创智信息科技有限公司 | 基于融合模型的文本特征提取方法、装置、聊天机器人和存储介质 |
CN109918497A (zh) * | 2018-12-21 | 2019-06-21 | 厦门市美亚柏科信息股份有限公司 | 一种基于改进textCNN模型的文本分类方法、装置及存储介质 |
CN109840279A (zh) * | 2019-01-10 | 2019-06-04 | 山东亿云信息技术有限公司 | 基于卷积循环神经网络的文本分类方法 |
CN110083700A (zh) * | 2019-03-19 | 2019-08-02 | 北京中兴通网络科技股份有限公司 | 一种基于卷积神经网络的企业舆情情感分类方法及*** |
CN109951846B (zh) * | 2019-03-25 | 2020-10-27 | 腾讯科技(深圳)有限公司 | 无线网络识别方法、装置、存储介质及计算机设备 |
CN110059191A (zh) * | 2019-05-07 | 2019-07-26 | 山东师范大学 | 一种文本情感分类方法及装置 |
- 2019-10-31: PCT application PCT/CN2019/114871 filed as WO2021081945A1 (active, Application Filing).
- 2019-10-31: CN application CN201980099197.XA filed as CN114207605A (pending).
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170308790A1 (en) * | 2016-04-21 | 2017-10-26 | International Business Machines Corporation | Text classification by ranking with convolutional neural networks |
CN109710770A (zh) * | 2019-01-31 | 2019-05-03 | 北京牡丹电子集团有限责任公司数字电视技术中心 | 一种基于迁移学习的文本分类方法及装置 |
CN110147452A (zh) * | 2019-05-17 | 2019-08-20 | 北京理工大学 | 一种基于层级bert神经网络的粗粒度情感分析方法 |
CN110334210A (zh) * | 2019-05-30 | 2019-10-15 | 哈尔滨理工大学 | 一种基于bert与lstm、cnn融合的中文情感分析方法 |
CN110309511A (zh) * | 2019-07-04 | 2019-10-08 | 哈尔滨工业大学 | 基于共享表示的多任务语言分析***及方法 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113434699A (zh) * | 2021-06-30 | 2021-09-24 | 平安科技(深圳)有限公司 | Bert模型的预训练方法、计算机装置和存储介质 |
CN113434699B (zh) * | 2021-06-30 | 2023-07-18 | 平安科技(深圳)有限公司 | 用于文本匹配的bert模型的预训练方法、计算机装置和存储介质 |
WO2023035940A1 (zh) * | 2021-09-10 | 2023-03-16 | 上海明品医学数据科技有限公司 | 一种目标对象推荐方法及*** |
CN113836302A (zh) * | 2021-09-26 | 2021-12-24 | 平安科技(深圳)有限公司 | 文本分类方法、文本分类装置及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN114207605A (zh) | 2022-03-18 |
Legal Events
Code | Title | Description |
---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 19950620; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | EP: PCT application non-entry in European phase | Ref document number: 19950620; Country of ref document: EP; Kind code of ref document: A1 |
32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 181022) |