CN113806537B - Commodity category classification method and device, equipment, medium and product thereof - Google Patents


Info

Publication number
CN113806537B
CN113806537B
Authority
CN
China
Prior art keywords
training
layer
classification
text
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111075426.XA
Other languages
Chinese (zh)
Other versions
CN113806537A (en)
Inventor
叶朝鹏 (Ye Chaopeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huaduo Network Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN202111075426.XA
Publication of CN113806537A
Application granted
Publication of CN113806537B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a commodity category classification method and a corresponding device, equipment, medium and product. The method comprises the following steps: acquiring the title text corresponding to a commodity object; calling a text feature extraction model to extract text feature information from the title text, wherein during training the model is trained layer by layer, with the same training sample, over the hierarchical structure of the category tree corresponding to commodity classification, and at each layer the weight parameters of the model are corrected with the actual loss value of the current layer until the model reaches a convergence state, the actual loss value being obtained by fusing the actual loss values of the previously trained layers with the loss function value of the current layer; and classifying based on the text feature information and marking the classification attribute of the commodity object with the classification result, the classification result comprising the category labels of all levels, with hierarchical membership, in the category tree. The model of the application is easily and efficiently trained to convergence.

Description

Commodity category classification method and device, equipment, medium and product thereof
Technical Field
The present application relates to the field of electronic commerce information technology, and in particular, to a method for classifying commodity categories, a corresponding apparatus, a computer device, a computer readable storage medium, and a computer program product.
Background
An electronic commerce platform carries a vast number of commodities, often on the order of tens of thousands or more, and these commodities can be organized efficiently by means of multiple category levels. Among the categories, a sub-category is typically subordinate to its parent category, and the levels spread out layer by layer to form a "category tree". The deeper the hierarchy of the category tree, the greater the maintenance burden; therefore, a category tree typically includes three or four levels and generally no more than five. At the data level, the category tree organizes the large number of commodity objects in the e-commerce platform with a multi-level classification structure, which is convenient for maintenance operations such as addition, query and update.
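As an illustrative sketch only (the class and method names are hypothetical, not from the patent), such a category tree can be held as nested nodes that support layer-by-layer addition and query:

```python
class CategoryNode:
    """Minimal category-tree node: each sub-category is subordinate to its
    parent, and the levels spread out layer by layer."""

    def __init__(self, label):
        self.label = label
        self.children = {}

    def add(self, path):
        # Insert a category path such as ["Apparel", "Women", "Dresses"],
        # reusing existing intermediate nodes where they already exist.
        node = self
        for label in path:
            node = node.children.setdefault(label, CategoryNode(label))
        return node

    def depth(self):
        # Depth of the tree rooted at this node (a leaf alone counts as 1).
        if not self.children:
            return 1
        return 1 + max(child.depth() for child in self.children.values())

root = CategoryNode("ROOT")
root.add(["Apparel", "Women", "Dresses"])
root.add(["Apparel", "Men", "Shirts"])
```

With the two paths above, the tree has four levels counting the root, matching the patent's observation that trees typically stay within three to five levels.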
The title text of a commodity can serve as a basis for classification: deep semantic feature information can be extracted from the title text with a deep semantic model based on natural language processing technology, and the corresponding commodity object can then be classified according to that feature information. However, as the number of levels in the category tree increases, the number of leaf categories grows geometrically; some leaf categories correspond to many commodity objects while others correspond to few, i.e., the commodity objects are unevenly distributed across the leaf categories. This creates difficulties for training a neural network model based on deep semantics. As is well known, training a neural network model depends on a large number of labeled training samples; if the title texts of commodity objects are used to train only the path from the top of the category tree to the leaf categories, the scarcity of training samples for some leaf categories easily causes the model to overfit, yielding inaccurate predictions and an unusable model.
Therefore, there is a need to improve the training process of deep semantic models that classify against a category tree according to commodity title text, to ensure that the resulting model predicts classification results more accurately and serves the intelligent commodity classification services of e-commerce platforms.
Disclosure of Invention
It is a primary object of the present application to solve at least one of the above problems by providing a commodity category classification method and a corresponding apparatus, computer device, computer-readable storage medium and computer program product.
To achieve the purposes of the application, the application adopts the following technical schemes:
The commodity category classification method provided in accordance with one of the purposes of the present application comprises the following steps:
Acquiring a title text corresponding to a commodity object;
Calling a text feature extraction model to extract text feature information from the title text, wherein during its training the model is trained layer by layer, with the same training sample, over the hierarchical structure of the category tree corresponding to commodity classification, and at each layer the weight parameters of the model are corrected with the actual loss value of the current layer until the model reaches a convergence state, the actual loss value being obtained by fusing the actual loss values of the previously trained layers with the loss function value of the current layer;
And classifying based on the text feature information and marking the classification attribute of the commodity object with the classification result, the classification result comprising the category labels of all levels, with hierarchical membership, in the category tree.
In a further embodiment, the training process of the text feature extraction model includes the following steps:
creating a plurality of training tasks to perform training for each layer in the category tree;
inputting the same training sample for each training task to start training;
And controlling each training task to transmit the actual loss value of each trained layer from the top layer to the bottom layer according to the hierarchical structure of the category tree, so that the corresponding layer fuses those actual loss values to realize the weight parameter correction of the text feature extraction model in the corresponding training task.
In a further embodiment, the training process of the text feature extraction model includes the following steps, performed for each layer in the classification structure of the category tree:
extracting text characteristic information from the training sample;
Inputting the text characteristic information into a classification model for classification to obtain a classification result corresponding to the current layer;
Carrying out weighted summation on the loss function value of the current layer and the respective actual loss values of all the prior training layers to obtain the actual loss value of the current layer;
and utilizing the actual loss value of the current layer to back-propagate and correct the weight parameters of the text feature extraction model, realizing gradient update.
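The per-layer loss fusion in the steps above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the fusion weights are assumptions, since the patent only specifies that a weighted summation is taken over the current layer's loss and the prior layers' actual losses.

```python
import math

def cross_entropy(probs, label):
    # Cross-entropy loss of one sample against its gold label index.
    return -math.log(probs[label])

def fused_layer_losses(layer_probs, layer_labels, weights):
    """For each category-tree level, top to bottom, fuse the current level's
    cross-entropy loss with the actual losses of all previously trained
    levels by weighted summation. Each fused value is the 'actual loss'
    that drives back-propagation for that level."""
    actual_losses = []
    for k, (probs, label) in enumerate(zip(layer_probs, layer_labels)):
        loss_k = cross_entropy(probs, label)
        # Weighted sum: current loss plus every prior level's actual loss.
        fused = weights[k] * loss_k + sum(
            w * prior for w, prior in zip(weights[:k], actual_losses))
        actual_losses.append(fused)
    return actual_losses
```

Because each level's fused value already contains its ancestors' actual losses, deeper levels are always corrected against accumulated supervision from the levels trained before them.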
In a further embodiment, the text feature information is extracted from the title text by calling a text feature extraction model, which comprises the following steps:
Constructing an embedded vector corresponding to each word segmentation in the title text;
splicing a left side vector and a right side vector which characterize the context semantics of each embedded vector, and constructing an intermediate feature vector;
and executing pooling operation on the intermediate feature vectors to obtain vectors representing the semantics of the title text as the text feature information.
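The three steps above match a TextRCNN-style encoder. A minimal NumPy sketch follows; the matrix names and the simple linear recurrences are illustrative assumptions (actual TextRCNN uses a recurrent cell), but the structure mirrors the text: left and right context vectors are built per token, concatenated with the token's embedding into an intermediate feature vector, then max-pooled into one text vector.

```python
import numpy as np

def textrcnn_features(emb, W_l, U_l, W_r, U_r):
    """emb: (n, d) token embeddings of the title text.
    Returns one (2h + d,) vector representing the whole text."""
    n, d = emb.shape
    h = W_l.shape[0]
    left = np.zeros((n, h))
    right = np.zeros((n, h))
    for i in range(1, n):                  # left context: scan forward
        left[i] = np.tanh(W_l @ left[i - 1] + U_l @ emb[i - 1])
    for i in range(n - 2, -1, -1):         # right context: scan backward
        right[i] = np.tanh(W_r @ right[i + 1] + U_r @ emb[i + 1])
    # Intermediate feature vector per token: [left; embedding; right].
    inter = np.concatenate([left, emb, right], axis=1)
    return inter.max(axis=0)               # max-pooling over time
```

Max-pooling over the time axis keeps, for each feature dimension, the strongest activation across all tokens, abstracting the title into a fixed-size semantic vector regardless of title length.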
In a further embodiment, marking the classification attribute of the commodity object with the classification result includes the following steps:
determining a plurality of category labels according to the classification result;
Inquiring a preset dictionary to obtain category texts corresponding to the plurality of category labels;
And assigning the category text as the classification attribute of the commodity object.
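The three marking steps above can be sketched as follows; the label ids, dictionary entries and field names are hypothetical, since the patent only specifies that each predicted per-level label is looked up in a preset dictionary and the resulting texts are assigned to the commodity object's classification attribute:

```python
def mark_classification(product, predicted_labels, dictionary):
    """Look up each predicted per-level category label in the preset
    dictionary and assign the resulting category texts as the commodity
    object's classification attribute."""
    category_texts = [dictionary[label] for label in predicted_labels]
    product["classification"] = category_texts
    return product

# Hypothetical label ids and dictionary entries, for illustration only.
category_dict = {
    101: "Apparel",
    204: "Apparel/Women",
    3117: "Apparel/Women/Dresses",
}
item = {"title": "summer floral dress"}
mark_classification(item, [101, 204, 3117], category_dict)
```

Note that the marked labels form a root-to-leaf path in the category tree, consistent with the hierarchical membership of the classification result.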
In a preferred embodiment, in the training process of the text feature extraction model, the training process of each level is operated by multitasking.
A commodity category classification device provided in accordance with one of the objects of the present application includes a title acquisition module, a feature extraction module and a classification marking module. The title acquisition module is used for acquiring the title text corresponding to a commodity object. The feature extraction module is used for calling a text feature extraction model to extract text feature information from the title text, wherein during its training the model is trained layer by layer, with the same training sample, over the hierarchical structure of the category tree corresponding to commodity classification, and at each layer the weight parameters of the model are corrected with the actual loss value of the current layer until the model reaches a convergence state, the actual loss value being obtained by fusing the actual loss values of the previously trained layers with the loss function value of the current layer. The classification marking module is used for classifying based on the text feature information and marking the classification attribute of the commodity object with the classification result, the classification result comprising the category labels of all levels, with hierarchical membership, in the category tree.
In a further embodiment, the training of the text feature extraction model is operated by the following structure: the task creation module is used for creating a plurality of training tasks to implement training for each layer in the category tree; the training starting module is used for inputting the same training sample into each training task to start training; and the transmission control module is used for controlling each training task to transmit the actual loss value of each trained layer from the top layer to the bottom layer according to the hierarchical structure of the category tree, so that the corresponding layer fuses those actual loss values to realize the weight parameter correction of the text feature extraction model in the corresponding training task.
In a further embodiment, the training process of the text feature extraction model includes a running structure trained for each layer in the classification structure of the category tree: the feature extraction example module is used for extracting text feature information from the training sample; the classification example module is used for inputting the text characteristic information into a classification model to classify, and obtaining a classification result corresponding to the current layer; the loss superposition example module is used for carrying out weighted summation on the loss function value of the current layer and the respective actual loss values of all the prior training layers to obtain the actual loss value of the current layer; and the correction module is used for utilizing the actual loss value of the current layer to back propagate and correct the weight parameter of the text feature extraction model to realize gradient update.
In a further embodiment, the feature extraction module, for calling a text feature extraction model to extract text feature information from the title text, includes: the vector construction submodule is used for constructing embedded vectors corresponding to each word segmentation in the title text; the semantic quoting sub-module is used for splicing a left side vector and a right side vector which characterize the context semantic of each embedded vector to construct an intermediate feature vector; and the pooling abstraction sub-module is used for executing pooling operation on the intermediate feature vector to obtain a vector representing the semantics of the title text as the text feature information.
In a further embodiment, the classification marking module includes: the label conversion sub-module is used for determining a plurality of category labels according to the classification result; the dictionary inquiry sub-module is used for inquiring a preset dictionary to obtain category texts corresponding to the plurality of category labels; and the object assignment sub-module is used for assigning the category text as the classification attribute of the commodity object.
In a preferred embodiment, in the training process of the text feature extraction model, the training process of each level is operated by multitasking.
A computer device provided in accordance with one of the objects of the present application comprises a central processor and a memory, the central processor being adapted to invoke and execute a computer program stored in the memory so as to perform the steps of the commodity category classification method of the present application.
A computer-readable storage medium adapted to another object of the present application stores, in the form of computer-readable instructions, a computer program implementing the commodity category classification method; when the program is invoked by a computer, it performs the steps comprised by the method.
A computer program product, provided in adaptation to a further object of the application, comprises a computer program/instructions which, when executed by a processor, implement the steps of the method according to any one of the embodiments of the application.
Compared with the prior art, the application has the following advantages:
Because the training stage of the text feature extraction model of the application proceeds layer by layer over the hierarchical structure of the category tree, when the model is trained on the next layer of the category tree, the actual loss values used for gradient update at the previously trained layers are fused into the loss function value of the current layer. By referencing the actual loss values of all previously trained layers, the text feature extraction model successively inherits the loss information corresponding to each layer of the category tree, and finally acquires the ability to uniformly perform representation learning and classification over the hierarchical structure of the whole category tree.
In the training process, it can be understood that the supervision label of one training sample is the path formed by its category labels at each level of the category tree, and these category labels stand in a hierarchical membership relation. Training layer by layer therefore amounts to continuously narrowing the data distribution range of each next layer, so that by the time the last layer, i.e. the leaf categories, is trained, the loss function can converge quickly even if the training samples of a leaf category are few. A model with stronger learning ability is thus trained to serve the correct classification of commodity objects, with high training efficiency, low training cost and good training effect.
Accordingly, once the text feature extraction model of the application is trained, commodity objects can be classified according to their corresponding title texts.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of an exemplary embodiment of a method for classifying categories of merchandise according to the present application;
FIG. 2 is a schematic diagram of a network architecture for implementing the merchandise category classification method of the present application;
FIG. 3 is a schematic flow chart of a text feature extraction model training process according to an embodiment of the application;
FIG. 4 is a flow chart of a single training task in an embodiment of the present application;
FIG. 5 is a schematic workflow diagram of a text feature extraction model based on TextRCNN models in an embodiment of the present application;
FIG. 6 is a schematic diagram of a network structure of TextRCNN employed in an embodiment of the present application;
FIG. 7 is a flow chart illustrating a process for performing classification marking according to the classification result according to the present application;
FIG. 8 is a schematic block diagram of a merchandise category classification device of the present application;
fig. 9 is a schematic structural diagram of a computer device used in the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, "client," "terminal," and "terminal device" are understood by those skilled in the art to include both devices that include only wireless signal receivers without transmitting capabilities and devices that include receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device, such as a personal computer or tablet, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device that may combine voice, data processing, facsimile and/or data communications capabilities; a PDA (Personal Digital Assistant) that may include a radio frequency receiver, pager, internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio frequency receiver. As used herein, "client" and "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. As used herein, a "client," "terminal," or "terminal device" may also be a communication terminal, an internet terminal, or a music/video playing terminal, for example a PDA, a MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or a device such as a smart TV or set-top box.
The hardware referred to by the application, such as servers, clients and service nodes, is essentially electronic equipment with the functions of a personal computer: a hardware device with the necessary components described by the Von Neumann principle, such as a central processing unit (including an arithmetic unit and a controller), memory, input devices and output devices. A computer program is stored in the memory; the central processing unit calls the program from memory, runs it, executes its instructions, and interacts with the input and output devices to complete specific functions.
It should be noted that the concept of the present application, called "server", is equally applicable to the case of server clusters. The servers should be logically partitioned, physically separate from each other but interface-callable, or integrated into a physical computer or group of computers, according to network deployment principles understood by those skilled in the art. Those skilled in the art will appreciate this variation and should not be construed as limiting the implementation of the network deployment approach of the present application.
One or more technical features of the present application, unless otherwise specified, may either be deployed on a server, with the client remotely invoking an online service interface provided by the server in order to use them, or be deployed and run directly on the client for use there.
Unless otherwise specified, the neural network models cited or potentially cited in the application may be deployed on a remote server and invoked remotely from a client, or may be deployed on a client with sufficient device capability for direct invocation. In some embodiments, when a neural network model runs on the client, its corresponding intelligence can be obtained through transfer learning, so as to reduce the requirement on the client's hardware running resources and avoid excessively occupying them.
The various data related to the present application, unless specified in the plain text, may be stored either remotely in a server or in a local terminal device, as long as it is suitable for being invoked by the technical solution of the present application.
Those skilled in the art will appreciate that although the various methods of the present application are described based on the same concept so as to be common to each other, they may be performed independently of each other unless specifically indicated otherwise. Similarly, the various embodiments disclosed herein are all presented based on the same general inventive concept; therefore, descriptions resting on that same concept, as well as those merely conveniently and appropriately modified from it, although differing, should be interpreted as equivalents.
Unless the text plainly indicates a mutually exclusive relationship, the technical features of the various embodiments disclosed herein may be cross-combined to flexibly construct new embodiments, as long as such a combination does not depart from the inventive spirit of the present application and can satisfy needs in the art or remedy deficiencies in the prior art. Those skilled in the art will be aware of this variant.
The commodity category classification method can be programmed into a computer program product and deployed in a client and/or a server to operate, so that the client can access an interface opened after the computer program product operates in the form of a webpage program or an application program, and man-machine interaction is realized with the process of the computer program product through a graphical user interface.
Referring to fig. 1 and 2, in an exemplary embodiment, the method is implemented by the network architecture shown in fig. 2, and includes the steps of:
Step S1100, acquiring a title text corresponding to a commodity object:
The application scenario concerned here is an e-commerce platform based on independent stations, where each independent station is a merchant instance of the e-commerce platform with its own independent access domain name, and the actual owner of the independent station is responsible for publishing and updating commodities.
The merchant instance of the independent station puts each commodity online; after the e-commerce platform acquires the information related to the commodity, it constructs a corresponding commodity object for data storage. The information of the commodity object mainly comprises text information and picture information, where the text information includes title information of the commodity object for display, content information introducing commodity details, attribute information describing commodity characteristics, and the like.
In order to implement the technical scheme of the application, a digest text of the commodity object can be collected. The digest text mainly adopts the title information of the commodity object, and can be enhanced with one or more items of its attribute information if necessary. In general, the digest text may be acquired according to preset quantity and content requirements; for example, it may be specified that the digest text includes the title information of a commodity object and the attribute information of all of its attribute items. Of course, those skilled in the art can flexibly adapt this process on this basis.
Finally, the digest text is used as the title text on which classification of the commodity object is performed.
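A minimal sketch of the digest-text assembly described above; the field names and the optional cap on attribute items are assumptions for illustration, not taken from the patent:

```python
def build_digest_text(product, max_attrs=None):
    """Concatenate the commodity title with its attribute texts to form the
    digest text that serves as the 'title text' for classification."""
    parts = [product["title"]]
    items = list(product.get("attributes", {}).items())
    if max_attrs is not None:          # optional cap on attribute items
        items = items[:max_attrs]
    parts += [f"{k}: {v}" for k, v in items]
    return " ".join(parts)

item = {"title": "summer floral dress",
        "attributes": {"color": "red", "material": "cotton"}}
```

Enhancing the title with attribute texts in this way gives the feature extraction model more semantic signal than the bare title alone.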
Step S1200, calling a text feature extraction model to extract text feature information from the title text, wherein during its training the model is trained layer by layer, with the same training sample, over the hierarchical structure of the category tree corresponding to commodity classification, and at each layer the weight parameters of the model are corrected with the actual loss value of the current layer until the model reaches a convergence state, the actual loss value being obtained by fusing the actual loss values of the previously trained layers with the loss function value of the current layer:
The text feature extraction model shown in fig. 2 vectorizes the title text and then extracts deep semantic features to obtain the corresponding text feature information. The text feature extraction model is trained in advance to reach a convergence state. No specific text feature extraction model is required: candidates include, but are not limited to, conventional network models based on deep semantic learning for extracting text feature information, such as BERT and TextRCNN. Although the application is described by way of example with reference to TextRCNN, this should not be taken as limiting the scope of the application, which is defined by its inventive concepts.
It can be understood that the classification capability of the text feature extraction model, i.e. classifying commodity objects on the basis of the multi-level structure of the category tree, is realized through training with the network architecture shown in fig. 2; the key is that the text feature extraction model is trained to acquire the representation learning capability required for classifying title text. Therefore, the focus of training the network architecture is training the text feature extraction model so that it learns the corresponding capabilities.
In order for the text feature extraction model to learn these representation learning capabilities, training may be implemented layer by layer over the hierarchical structure of the category tree.
In this embodiment, the basic principle of training the text feature extraction model is as follows: the same training sample is used as input data of the text feature extraction model, training proceeds layer by layer from the top layer of the category tree hierarchy, the corresponding text feature information is extracted, and classification is then performed by the classification model according to that text feature information to obtain a classification result.
For the training task corresponding to each layer of the category tree, the loss function value of the current layer is calculated from the classification result. The classification model can apply a cross entropy loss function, known to those skilled in the art, to obtain the loss function value of the current layer; the formula is as follows:

L_ce = -Σ_i y_i · log(p_i)

wherein y_i is the supervision label indicating whether the i-th category label of the current layer is the true label, and p_i is the predicted probability of the i-th category label.
Since those skilled in the art are familiar with the cross entropy function, further explanation is omitted.
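For concreteness, the cross entropy calculation relied on here can be sketched in plain Python; the probability values and label names below are illustrative assumptions, not taken from the patent:

```python
import math

def cross_entropy(probs, target):
    """Single-sample cross entropy: the negative log of the probability
    the classifier assigned to the true category label of this layer."""
    return -math.log(probs[target])

# Illustrative per-layer prediction over two candidate category labels
probs = {"shirts": 0.7, "trousers": 0.3}
loss = cross_entropy(probs, "shirts")  # approximately 0.3567
```

A perfectly confident correct prediction (probability 1.0) yields a loss of zero, and the loss grows as the probability assigned to the true label shrinks.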
Starting from the training task corresponding to the top layer of the category tree, the training tasks are organized in hierarchical sequence toward the bottom layer, so that each training task corresponds to one layer. It will be appreciated that each layer is trained on the same training sample and computes its own cross entropy loss function value accordingly.
For each layer's training process, the actual loss value used for the gradient update is a total loss value that fuses the actual loss values of all previously trained layers with the cross entropy loss function value of the current layer. The actual loss value required by each current layer for back propagation is calculated as:

L_t = Σ_{i=1..t} w_i · L_i

wherein w_i is the weight parameter corresponding to each layer and L_i is the actual loss value of each layer participating in the calculation. Since the actual loss value of the current layer t has not yet been calculated, its cross entropy loss function value is used in its place, thereby realizing the fusion of the current layer's loss function value with the actual loss values of all layers trained before the current layer.
From this formula it can be seen that when the current layer calculates the actual loss value required for its gradient update, the actual loss values of the layers trained before it are included in the weighted summation. For example, when the current layer is the third layer, the actual loss value of the first layer (the top layer), the actual loss value of the second layer, and the cross entropy loss function value of the third layer itself are all referenced. For the first layer, which lacks any previously trained layer, the actual loss value is simply its cross entropy loss function value. The second, fourth, or any other nth layer is handled in the same way as the third layer.
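The weighted summation just described can be sketched as follows; the weight values and loss numbers are illustrative assumptions, since the patent does not fix them:

```python
def fused_loss(ce_current, prior_actual_losses, weights):
    """Actual loss of the current layer: weighted sum of the current
    layer's cross entropy value (last weight) and the actual loss
    values of all previously trained (higher) layers."""
    assert len(weights) == len(prior_actual_losses) + 1
    total = weights[-1] * ce_current
    for w, l in zip(weights, prior_actual_losses):
        total += w * l
    return total

# Layer 1 (top): no prior layers, actual loss equals its own cross entropy
l1 = fused_loss(0.9, [], [1.0])
# Layer 2: fuses layer 1's actual loss with its own cross entropy
l2 = fused_loss(0.7, [l1], [0.5, 1.0])        # 0.5*0.9 + 0.7 = 1.15
# Layer 3: fuses the actual losses of layers 1 and 2
l3 = fused_loss(0.6, [l1, l2], [0.3, 0.3, 1.0])
```

Note how layer 3's loss already carries layer 2's loss, which itself carries layer 1's, so the constraint propagates down the hierarchy.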
It follows that as training proceeds layer by layer, the actual loss value of each subsequent layer is always constrained by the actual loss values of all previously trained layers; thus, when the weight parameters of the text feature extraction model are corrected by back propagation, the actual loss values from the top layer to the bottom layer of the whole hierarchy are integrated. Training iterates in this manner until the network architecture reaches a convergence state, at which point training is complete and the text feature extraction model required by the application is obtained, possessing the deep semantic representation learning capability over title text that classification requires. The text feature extraction model can then be used to extract the text feature information required for classification from the title text.
Step S1300, classifying based on the text feature information, and marking classification attributes of the commodity object with classification results, where the classification results include category labels of each level having a hierarchical membership in the category tree:
After the text feature extraction model of the network architecture, trained to a convergence state, obtains the corresponding text feature information from the title text, the text feature information can be classified with the classification model to obtain a corresponding classification result. The classification result comprises probability scores mapped to the category labels of each level of the category tree hierarchy; among the plurality of category labels at each level, one carries the maximum score, and extracting the maximum-scoring label of each level yields a classification path with hierarchical membership, completing the classification.
Because the classification path comprises a plurality of category labels forming a hierarchical membership, the classification marking process can be completed by marking the classification attributes of the commodity object corresponding to the title text with the category texts corresponding to those category labels.
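The per-level maximum selection above can be sketched in a few lines; the level structure and label names are illustrative assumptions:

```python
def decode_path(level_scores):
    """Pick, per level of the category tree, the category label with the
    maximum probability score, yielding a top-to-bottom classification path."""
    path = []
    for scores in level_scores:  # one {label: score} dict per level
        path.append(max(scores, key=scores.get))
    return path

scores = [
    {"clothing": 0.8, "electronics": 0.2},   # level 1 (top)
    {"menswear": 0.6, "womenswear": 0.4},    # level 2
    {"shirts": 0.7, "trousers": 0.3},        # level 3 (leaf)
]
path = decode_path(scores)  # -> ["clothing", "menswear", "shirts"]
```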
In the text feature extraction model of the application, because the training stage trains layer by layer over the hierarchical structure of the category tree, when a lower layer of the category tree is trained, the actual loss values used for the gradient updates of the previously trained layers are fused with the loss function value of the current layer. Since the actual loss values of all previously trained layers are referenced, the text feature extraction model successively inherits the loss information corresponding to each layer of the category tree, and finally obtains the capability of uniformly performing representation learning and classification over the hierarchical structure of the whole category tree.
It can be understood that, during training, the supervision label of a given training sample is a path formed by the category labels of each level in the category tree, and these category labels have hierarchical membership. Adopting a layer-by-layer training mode is therefore equivalent to continuously narrowing the data distribution range of each subsequent layer, so that when the last layer, the leaf categories, is trained, the loss function can converge quickly even with few leaf-category training samples. A model with strong learning ability is thus trained to serve the correct classification of commodity objects, with high training efficiency, low training cost, and good training effect.
Accordingly, after the text feature extraction model of the application is trained, classification can be performed according to the title text corresponding to the commodity object.
Referring to fig. 3, in a deepened embodiment, the training process of the text feature extraction model includes the following steps:
Step S2100, creating a plurality of training tasks to perform training for each layer in the category tree:
The number of layers of the category tree is generally three, four, or five, and a corresponding plurality of training tasks can be created in a multitask training manner so that each training task is responsible for the training of its corresponding layer.
Step S2200, inputting the same training sample for each training task to start training:
The multiple training tasks are trained on the same training sample; therefore, in each iteration of the training process, the same training sample is retrieved from the training data set and transmitted to the multiple training tasks simultaneously, so that each training task trains on the same training sample.
Step S2300, controlling each training task to transmit the actual loss value of each training layer from the top layer to the bottom layer according to the hierarchical structure of the category tree, so that the corresponding layer fuses the actual loss value to realize the correction of the weight parameters of the text feature model in the corresponding training task:
Although this embodiment trains by multitasking, the instance of the text feature extraction model in each task needs the actual loss values of the previously trained levels in order to implement its gradient update according to the hierarchical membership; essentially, a serial relationship therefore exists among the multiple training tasks. In the training process, the training tasks corresponding to the category tree are controlled to train layer by layer from the top layer to the bottom layer. The current layer performs a weighted summation of its own cross entropy loss function value and the actual loss values of the previously trained layers, obtaining a total loss value as the actual loss value of the current layer, with which the network architecture is gradient-updated. The actual loss value of the current layer is then transmitted to the training tasks of the layers below it so that they can reference it. Nested in this way, each layer is constrained by the actual loss values of all the layers above it, and the training of the text feature extraction model at the current layer is achieved on the basis of those constraints.
In this embodiment, by constructing a multi-task training mechanism, the text feature extraction model is trained at the corresponding layer in each layer's training task based on the same training sample, and the actual loss values are then spread between the different training tasks in their serial relationship for reference by each layer subordinate to the current one. This realizes a multi-task collaborative training mechanism, which can greatly improve training efficiency and make the model converge more quickly.
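The serial relationship between per-layer training tasks can be sketched as follows. This is a minimal toy, not the patent's implementation: the task class, the fixed fusion weights, and the cross entropy values are all illustrative assumptions, and the back-propagation step is only indicated in a comment.

```python
class LayerTask:
    """One training task responsible for a single level of the category tree."""
    def __init__(self, level, weights):
        self.level = level
        self.weights = weights   # one weight per prior layer, plus the current layer

    def step(self, ce_value, prior_actual_losses):
        """Fuse this layer's cross entropy with the actual losses handed
        down from the tasks of all higher (previously trained) layers."""
        total = self.weights[-1] * ce_value
        total += sum(w * l for w, l in zip(self.weights, prior_actual_losses))
        # a real implementation would back-propagate `total` here to correct
        # the weight parameters of the shared text feature extraction model
        return total

# Three levels, same sample: each task receives its own cross entropy value
tasks = [LayerTask(t, [0.5] * t + [1.0]) for t in range(3)]
prior = []   # actual losses transmitted top layer -> bottom layer
for task, ce in zip(tasks, [0.8, 0.6, 0.4]):
    prior.append(task.step(ce, prior))
```

After the loop, `prior` holds each layer's actual loss, each one constrained by those of the layers above it.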
Referring to fig. 4, in a further embodiment, the training process of the text feature extraction model in each training task is mainly disclosed, so that the training step for each layer in the classification structure of the category tree includes:
step S3100, extracting text feature information from the training sample:
for each training task, the text feature extraction model is utilized to extract the corresponding text feature information from the training samples.
Step S3200, inputting the text characteristic information into a classification model for classification, and obtaining a classification result corresponding to the current layer:
It can be seen that each training task is actually an instance of the network architecture shown in fig. 2, so the classification model can classify according to the text feature information extracted from the training samples and obtain a corresponding classification result. The content of the classification result is as described in the foregoing examples.
Step S3300, performing weighted summation on the loss function value of the current layer and the respective actual loss values of all the previous training layers to obtain an actual loss value of the current layer:
A previously trained layer is defined relative to the current layer: it is a layer higher than the current layer in the category tree, whose corresponding training task has already been trained and has obtained its actual loss value. After the current layer is classified by the classification model, the loss function value of the current layer can be calculated with the cross entropy function. Accordingly, as described in the foregoing embodiments, the loss function value of the current layer and the actual loss values of all previously trained layers may be weighted and summed to obtain the actual loss value of the current layer.
Step S3400, realizing gradient update by utilizing the weight parameters of the actual loss value back propagation corrected text feature extraction model of the current layer:
As previously described, the actual loss value of the current layer is used to gradient-update the network architecture, back-propagating to correct the weight parameters of the text feature extraction model; typically, over multiple such iterations, the entire network gradually reaches convergence.
This embodiment reveals the specific execution process within each training task; it can be seen that the multiple training tasks can all be realized by the same business logic, so the realization principle is simple, the development cost is low, and implementation is easy.
Referring to fig. 5, in a deepened embodiment, textRCNN is taken as a text feature extraction model for illustration, and the step S1200 of calling the text feature extraction model to extract text feature information from the title text includes the following steps:
Step S1210, constructing an embedded vector corresponding to each word segment in the title text:
As shown in fig. 6, the text feature extraction model constructed based on TextRCNN comprises a convolutional layer for encoding followed by a pooling layer, whose result is then output.
For a title text, format preprocessing may be performed first; after word segmentation, the text is vectorized to obtain the embedded vector e(w_i) corresponding to each word segment. The embedded vector is the representation of the corresponding word and can be determined from a preset dictionary mapping relation, converting the word from text into a vector.
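The dictionary-based vectorization can be sketched as follows; the vocabulary, the two-dimensional vectors, and the token strings are toy assumptions, since a real system would use a trained embedding table of much higher dimension:

```python
# Toy vocabulary and embedding table (illustrative values)
vocab = {"men's": 0, "cotton": 1, "shirt": 2}
embedding_table = [[0.1, 0.2], [0.3, 0.1], [0.0, 0.4]]

def embed(tokens):
    """Map each word segment to its embedded vector e(w_i) through the
    preset dictionary mapping relation."""
    return [embedding_table[vocab[t]] for t in tokens]

vectors = embed(["cotton", "shirt"])  # -> [[0.3, 0.1], [0.0, 0.4]]
```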
Step S1220, concatenating the left side vector and the right side vector characterizing the context semantics for each embedded vector, and constructing an intermediate feature vector:
According to the TextRCNN principle, when encoding the embedded vector of each word, the left-side vector c_l(w_i) and the right-side vector c_r(w_i) need to be spliced according to the context, wherein the left-side vector carries the semantic reference information of the words preceding the word and the right-side vector that of the words following it. After the left-side vector, the embedded vector, and the right-side vector are spliced, the spliced result is further subjected to a linear transformation, obtaining the intermediate feature information corresponding to each word.
Step S1230, performing pooling operation on the intermediate feature vector to obtain a vector representing the semantics of the heading text as the text feature information:
Further, a pooling operation is performed on the intermediate feature vectors, mapping them into a single vector representing the semantics of the title text; encoding is thereby completed, and this vector serves as the text feature information.
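A minimal, weight-free sketch of this encoding flow follows. It is not the patent's TextRCNN implementation: the fixed 0.5 mixing factors stand in for learned recurrence weights, the linear transformation is omitted, and the toy one-dimensional embeddings are illustrative.

```python
def text_rcnn_encode(embeds):
    """TextRCNN-style encoding of a list of word embedding vectors:
    left/right context vectors accumulate neighbouring semantics, each
    word is represented as [c_l(w_i); e(w_i); c_r(w_i)], and element-wise
    max pooling over positions yields the text feature vector."""
    n, dim = len(embeds), len(embeds[0])
    zero = [0.0] * dim
    # left context of word i summarises the words before it
    left = [zero]
    for i in range(1, n):
        left.append([0.5 * a + 0.5 * b for a, b in zip(left[-1], embeds[i - 1])])
    # right context of word i summarises the words after it
    right = [zero] * n
    for i in range(n - 2, -1, -1):
        right[i] = [0.5 * a + 0.5 * b for a, b in zip(right[i + 1], embeds[i + 1])]
    # splice left context, embedding, right context per word
    concat = [l + e + r for l, e, r in zip(left, embeds, right)]
    # max pooling across word positions gives the sentence-level vector
    return [max(col) for col in zip(*concat)]

feature = text_rcnn_encode([[1.0], [0.0], [2.0]])
```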
This embodiment further provides the network structure of the text feature extraction model and its encoding process, so that readers can more easily understand the inventiveness of the application.
Referring to fig. 7, in a deepened embodiment, in the step S1300, the classification attribute of the commodity object is marked with the classification result, which includes the following steps:
Step S1310, determining a plurality of category labels according to the classification result:
As described above, when the network architecture of the present application is used in the production stage, the classification result obtained by the classification model comprises the probability score mapped to each classification label in the hierarchical structure of the category tree. A classification path can be determined from these probability scores, that is, a corresponding classification label is determined at each level, and these classification labels have hierarchical membership. Thus, the corresponding classification labels can be determined according to the probability scores in the classification result.
Step S1320, inquiring a preset dictionary to obtain category texts corresponding to the plurality of category labels;
A dictionary may be pre-constructed for storing the mappings between category texts in the category tree hierarchy and the values that serve as classification labels in the classification result of the classification model, whereby the mapping relationship between category texts and classification labels is fixed in the dictionary. With this dictionary, the category text corresponding to each classification label on the classification path can be queried.
Step S1330, assigning the category text as a classification attribute of the commodity object:
Each commodity object is associated with a classification attribute; by assigning the obtained category texts to the classification attribute of the commodity object corresponding to the title text, the classification of the commodity object is completed.
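Steps S1310 to S1330 can be sketched together as follows; the label values, category texts, and the `categories` attribute name are illustrative assumptions, not from the patent:

```python
# Illustrative preset dictionary: classification label -> category text
label_dict = {101: "Clothing", 203: "Menswear", 311: "Shirts"}

def mark_product(product, label_path):
    """Translate the predicted classification path into category texts via
    the preset dictionary and assign them as the commodity object's
    classification attribute."""
    product["categories"] = [label_dict[l] for l in label_path]
    return product

item = {"title": "men's cotton shirt"}
mark_product(item, [101, 203, 311])
# item["categories"] -> ["Clothing", "Menswear", "Shirts"]
```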
This embodiment provides the process of marking and classifying the commodity object according to the classification result; it can be seen that the process can be realized automatically, which can greatly improve the automatic classification efficiency of the e-commerce platform.
Referring to fig. 8, a commodity category classification device according to one of the objects of the present application is a functional implementation of the commodity category classification method of the present application, and the device includes: title acquisition module 1100, feature extraction module 1200, and category tagging module 1300. The title acquisition module 1100 is configured to acquire the title text corresponding to a commodity object. The feature extraction module 1200 is configured to invoke a text feature extraction model to extract text feature information from the title text; during training of the text feature extraction model, the hierarchical structure of the category tree corresponding to commodity classification is trained layer by layer with the same training sample, and at each layer the weight parameters of the text feature extraction model are corrected with the actual loss value of the current layer until the model is trained to a convergence state, wherein the actual loss value is obtained by fusing the loss function value of the current layer with the actual loss values of the previously trained layers. The classification marking module 1300 is configured to perform classification based on the text feature information and mark the classification attributes of the commodity object with the classification result, where the classification result includes the category labels of each level having hierarchical membership in the category tree.
In a further embodiment, in the training process of the text feature extraction model, the text feature extraction model is operated in the following structure: the task creation module is used for creating a plurality of training tasks to implement training for each layer in the category tree; the training starting module is used for inputting the same training sample for each training task to start training; the transmission control module is used for controlling each training task to transmit the actual loss value of each training layer from the top layer to the bottom layer according to the hierarchical structure of the category tree, so that the corresponding layer fuses the actual loss value to realize the weight parameter correction of the text feature model in the corresponding training task.
In a further embodiment, the training process of the text feature extraction model includes a running structure trained for each layer in the classification structure of the category tree: the feature extraction example module is used for extracting text feature information from the training sample; the classification example module is used for inputting the text characteristic information into a classification model to classify, and obtaining a classification result corresponding to the current layer; the loss superposition example module is used for carrying out weighted summation on the loss function value of the current layer and the respective actual loss values of all the prior training layers to obtain the actual loss value of the current layer; and the correction module is used for utilizing the actual loss value of the current layer to back propagate and correct the weight parameter of the text feature extraction model to realize gradient update.
In a further embodiment, the feature extraction module 1200 invokes a text feature extraction model to extract text feature information from the title text, including: the vector construction submodule is used for constructing embedded vectors corresponding to each word segmentation in the title text; the semantic quoting sub-module is used for splicing a left side vector and a right side vector which characterize the context semantic of each embedded vector to construct an intermediate feature vector; and the pooling abstraction sub-module is used for executing pooling operation on the intermediate feature vector to obtain a vector representing the semantics of the title text as the text feature information.
In a further embodiment, the classification marking module 1300 includes: the label conversion sub-module is used for determining a plurality of category labels according to the classification result; the dictionary inquiry sub-module is used for inquiring a preset dictionary to obtain category texts corresponding to the plurality of category labels; and the object assignment sub-module is used for assigning the category text as the classification attribute of the commodity object.
In a preferred embodiment, in the training process of the text feature extraction model, the training process of each level is operated by multitasking.
In order to solve the above technical problems, an embodiment of the application also provides computer equipment. Fig. 9 schematically shows the internal structure of the computer device. The computer device includes a processor, a computer-readable storage medium, a memory, and a network interface connected by a system bus. The computer-readable storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database can store a control information sequence, and the computer-readable instructions, when executed by the processor, can enable the processor to realize a commodity category classification method. The processor of the computer device provides computing and control capabilities, supporting the operation of the entire computer device. The memory of the computer device may store computer-readable instructions that, when executed by the processor, cause the processor to perform the commodity category classification method of the present application. The network interface of the computer device is used for communicating with a terminal. It will be appreciated by persons skilled in the art that the architecture shown in fig. 9 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
The processor in this embodiment is configured to execute the specific functions of each module and its sub-modules in fig. 8, and the memory stores the program codes and various data required for executing the above modules or sub-modules. The network interface is used for data transmission with the user terminal or the server. The memory in this embodiment stores the program codes and data required for executing all modules/sub-modules of the commodity category classification device according to the present application, and the server can call the program codes and data to execute the functions of all sub-modules.
The present application also provides a storage medium storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the method of classifying categories of merchandise of any of the embodiments of the present application.
The application also provides a computer program product comprising computer programs/instructions which when executed by one or more processors implement the steps of the method of any of the embodiments of the application.
Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments of the present application may be implemented by a computer program for instructing relevant hardware, where the computer program may be stored on a computer readable storage medium, where the program, when executed, may include processes implementing the embodiments of the methods described above. The storage medium may be a computer readable storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
In summary, the method adopts a mode of carrying out layer-by-layer training on the multi-layer structure of the category tree to reduce the number of training samples required by the text feature extraction model for representing the learning ability, reduces the training cost, improves the training efficiency, and improves the ability of the text feature extraction model for representing the learning ability required by classifying the commodity categories.
Those of skill in the art will appreciate that the various operations, methods, steps in the flow, acts, schemes, and alternatives discussed in the present application may be alternated, altered, combined, or eliminated. Further, other steps, means, or steps in a process having various operations, methods, or procedures discussed herein may be alternated, altered, rearranged, disassembled, combined, or eliminated. Further, steps, measures, schemes in the prior art with various operations, methods, flows disclosed in the present application may also be alternated, altered, rearranged, decomposed, combined, or deleted.
The foregoing is only a partial embodiment of the present application, and it should be noted that it will be apparent to those skilled in the art that modifications and adaptations can be made without departing from the principles of the present application, and such modifications and adaptations are intended to be comprehended within the scope of the present application.

Claims (8)

1. A method for classifying categories of goods, comprising the steps of:
Acquiring a title text corresponding to a commodity object;
Calling a text feature extraction model to extract text feature information from the title text; in the training process of the text feature extraction model, training the hierarchical structure of the category tree corresponding to commodity classification by using the same training sample layer by layer, and correcting the weight parameter of the text feature extraction model by using the actual loss value of the current layer during each layer of training until the text feature extraction model is trained to a convergence state, wherein the actual loss value is obtained by fusing the loss function value of the current layer with the actual loss values of the previously trained layers;
Classifying based on text characteristic information corresponding to abstract text containing title information of commodity objects of an independent station, and marking classification attributes of the commodity objects with classification results, wherein the classification results comprise category labels of all levels with hierarchical membership in the category tree;
the training process of the text feature extraction model comprises the following steps:
creating a plurality of training tasks to perform training for each layer in the category tree;
inputting the same training sample for each training task to start training;
controlling each training task to transmit the actual loss value of each training layer from the top layer to the bottom layer according to the hierarchical structure of the category tree, so that the corresponding layer fuses the actual loss value to realize the weight parameter correction of the text feature model in the corresponding training task;
the training process of the text feature extraction model comprises the steps of training for each layer in the classification structure of the category tree:
extracting text characteristic information from the training sample;
Inputting the text characteristic information into a classification model for classification to obtain a classification result corresponding to the current layer;
Carrying out weighted summation on the loss function value of the current layer and the respective actual loss values of all the prior training layers to obtain the actual loss value of the current layer;
and (5) utilizing the actual loss value of the current layer to back propagate and correct the weight parameter of the text feature extraction model to realize gradient updating.
2. The commodity category classification method of claim 1, wherein invoking a text feature extraction model to extract text feature information from the title text comprises the steps of:
Constructing an embedded vector corresponding to each word segmentation in the title text;
splicing a left side vector and a right side vector which characterize the context semantics of each embedded vector, and constructing an intermediate feature vector;
and executing pooling operation on the intermediate feature vectors to obtain vectors representing the semantics of the title text as the text feature information.
3. The commodity category classification method according to claim 1 or 2, characterized in that the classification attribute of the commodity object is marked with a classification result, comprising the steps of:
determining a plurality of category labels according to the classification result;
Inquiring a preset dictionary to obtain category texts corresponding to the plurality of category labels;
And assigning the category text as the classification attribute of the commodity object.
4. The method of claim 1 or 2, wherein the training process of each level of the training process of the text feature extraction model is performed in a multi-tasking manner.
5. A commodity category classification device, comprising:
the title acquisition module is used for acquiring title text corresponding to the commodity object;
The feature extraction module is used for calling a text feature extraction model to extract text feature information from the title text; in the training process of the text feature extraction model, training the hierarchical structure of the category tree corresponding to commodity classification by using the same training sample layer by layer, and correcting the weight parameter of the text feature extraction model by using the actual loss value of the current layer during each layer of training until the text feature extraction model is trained to a convergence state, wherein the actual loss value is obtained by fusing the loss function value of the current layer with the actual loss values of the previously trained layers;
The classification marking module is used for classifying based on text characteristic information corresponding to abstract text containing title information of the commodity object of the independent station, and marking classification attributes of the commodity object with classification results, wherein the classification results comprise category labels of all levels with hierarchical membership in the category tree;
In the training process of the text feature extraction model, the text feature extraction model operates in the following structure: the task creation module is used for creating a plurality of training tasks to implement training for each layer in the category tree; the training starting module is used for inputting the same training sample for each training task to start training; the transmission control module is used for controlling each training task to transmit the actual loss value of each training layer from the top layer to the bottom layer according to the hierarchical structure of the category tree so that the corresponding layer fuses the actual loss value to realize the weight parameter correction of the text feature model in the corresponding training task;
The training process of the text feature extraction model comprises a running structure trained for each layer in the classification structure of the category tree: the feature extraction example module is used for extracting text feature information from the training sample; the classification example module is used for inputting the text characteristic information into a classification model to classify, and obtaining a classification result corresponding to the current layer; the loss superposition example module is used for carrying out weighted summation on the loss function value of the current layer and the respective actual loss values of all the prior training layers to obtain the actual loss value of the current layer; and the correction module is used for utilizing the actual loss value of the current layer to back propagate and correct the weight parameter of the text feature extraction model to realize gradient update.
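The loss-superposition step of claim 5 can be sketched as follows: each layer's actual loss is a weighted sum of its own loss-function value and the actual losses already transmitted from the preceding (higher) layers of the category tree. The weights and loss values below are hypothetical; in training, each fused value would drive back-propagation for that layer's task:

```python
def fused_layer_losses(layer_loss_values, layer_weights=None):
    """Fuse each layer's loss-function value with the actual loss values
    of all preceding layers via a weighted sum, top layer first."""
    if layer_weights is None:
        layer_weights = [1.0] * len(layer_loss_values)  # equal weights by default
    actual_losses = []
    for k, loss in enumerate(layer_loss_values):
        # Current layer's actual loss = weighted own loss + sum of the
        # actual losses already computed for all higher layers.
        fused = layer_weights[k] * loss + sum(actual_losses)
        actual_losses.append(fused)
    return actual_losses

# Loss-function values for a three-level tree, top to leaf (illustrative).
actual_losses = fused_layer_losses([0.9, 0.6, 0.3])
```

Because each fused value folds in all higher layers, gradient updates at a leaf layer also reflect errors made at the coarser levels of the category tree, which is the stated point of transmitting losses top-down.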
6. A computer device comprising a central processor and a memory, characterized in that the central processor is configured to invoke a computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 4.
7. A computer-readable storage medium, characterized in that it stores, in the form of computer-readable instructions, a computer program implementing the method according to any one of claims 1 to 4, which, when invoked by a computer, performs the steps comprised by the corresponding method.
8. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 4.
CN202111075426.XA 2021-09-14 2021-09-14 Commodity category classification method and device, equipment, medium and product thereof Active CN113806537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111075426.XA CN113806537B (en) 2021-09-14 2021-09-14 Commodity category classification method and device, equipment, medium and product thereof


Publications (2)

Publication Number Publication Date
CN113806537A CN113806537A (en) 2021-12-17
CN113806537B true CN113806537B (en) 2024-06-28

Family

ID=78895263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111075426.XA Active CN113806537B (en) 2021-09-14 2021-09-14 Commodity category classification method and device, equipment, medium and product thereof

Country Status (1)

Country Link
CN (1) CN113806537B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114548323A * 2022-04-18 2022-05-27 Alibaba (China) Co., Ltd. Commodity classification method, equipment and computer storage medium
CN117892799B * 2024-03-15 2024-06-04 University of Science and Technology of China Financial intelligent analysis model training method and system with multi-level tasks as guidance

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110209817A (en) * 2019-05-31 2019-09-06 安徽省泰岳祥升软件有限公司 Training method and device of text processing model and text processing method
CN112801720A (en) * 2021-04-12 2021-05-14 连连(杭州)信息技术有限公司 Method and device for generating shop category identification model and identifying shop category

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
WO2019219083A1 (en) * 2018-05-18 2019-11-21 北京中科寒武纪科技有限公司 Video retrieval method, and method and apparatus for generating video retrieval mapping relationship
CN110955772B (en) * 2018-09-26 2023-06-06 阿里巴巴集团控股有限公司 Text structured model component deployment method, device, equipment and storage medium
CN109871885B (en) * 2019-01-28 2023-08-04 南京林业大学 Plant identification method based on deep learning and plant taxonomy
CN111191741A (en) * 2020-01-15 2020-05-22 中国地质调查局发展研究中心 Rock classification constraint inheritance loss method of rock recognition deep learning model
CN111309919B (en) * 2020-03-23 2024-04-16 智者四海(北京)技术有限公司 Text classification model system and training method thereof
CN112241493A (en) * 2020-10-28 2021-01-19 浙江集享电子商务有限公司 Commodity retrieval method and device, computer equipment and storage medium
CN113011529B (en) * 2021-04-28 2024-05-07 平安科技(深圳)有限公司 Training method, training device, training equipment and training equipment for text classification model and readable storage medium

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN110209817A (en) * 2019-05-31 2019-09-06 安徽省泰岳祥升软件有限公司 Training method and device of text processing model and text processing method
CN112801720A (en) * 2021-04-12 2021-05-14 连连(杭州)信息技术有限公司 Method and device for generating shop category identification model and identifying shop category


Similar Documents

Publication Publication Date Title
CN111368548A (en) Semantic recognition method and device, electronic equipment and computer-readable storage medium
CN113806537B (en) Commodity category classification method and device, equipment, medium and product thereof
CN108334891A (en) A kind of Task intent classifier method and device
CN113177124A (en) Vertical domain knowledge graph construction method and system
CN114912433A (en) Text level multi-label classification method and device, electronic equipment and storage medium
CN113850201A (en) Cross-modal commodity classification method and device, equipment, medium and product thereof
CN113792786A (en) Automatic commodity object classification method and device, equipment, medium and product thereof
CN114186056A (en) Commodity label labeling method and device, equipment, medium and product thereof
CN115731425A (en) Commodity classification method, commodity classification device, commodity classification equipment and commodity classification medium
CN113962224A (en) Named entity recognition method and device, equipment, medium and product thereof
CN115018549A (en) Method for generating advertisement file, device, equipment, medium and product thereof
CN116521906B (en) Meta description generation method, device, equipment and medium thereof
Liu et al. Bagging based ensemble transfer learning
CN114238524B (en) Satellite frequency-orbit data information extraction method based on enhanced sample model
CN114663155A (en) Advertisement putting and selecting method and device, equipment, medium and product thereof
CN116976920A (en) Commodity shopping guide method and device, equipment and medium thereof
CN115099854A (en) Method for creating advertisement file, device, equipment, medium and product thereof
CN113806536B (en) Text classification method and device, equipment, medium and product thereof
CN111079376B (en) Data labeling method, device, medium and electronic equipment
CN116823404A (en) Commodity combination recommendation method, device, equipment and medium thereof
CN116975743A (en) Industry information classification method, device, computer equipment and storage medium
CN116521843A (en) Intelligent customer service method facing user, device, equipment and medium thereof
CN116029793A (en) Commodity recommendation method, device, equipment and medium thereof
CN115563280A (en) Commodity label labeling method and device, equipment and medium thereof
CN115700579A (en) Advertisement text generation method and device, equipment and medium thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant