CN112241493A - Commodity retrieval method and device, computer equipment and storage medium

Commodity retrieval method and device, computer equipment and storage medium

Info

Publication number
CN112241493A
CN112241493A (application CN202011169418.7A)
Authority
CN
China
Prior art keywords
training
retrieval
text
commodity
probability set
Prior art date
Legal status
Pending
Application number
CN202011169418.7A
Other languages
Chinese (zh)
Inventor
黄李
Current Assignee
Hangzhou Yunchuang Share Network Technology Co ltd
Zhejiang Jixiang E Commerce Co ltd
Original Assignee
Hangzhou Yunchuang Share Network Technology Co ltd
Zhejiang Jixiang E Commerce Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Yunchuang Share Network Technology Co ltd and Zhejiang Jixiang E Commerce Co ltd
Priority to CN202011169418.7A
Publication of CN112241493A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G06F 40/216 Parsing using statistical methods
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/30 Semantic analysis
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/047 Probabilistic or stochastic networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a commodity retrieval method. The commodity retrieval method comprises the following steps: acquiring a retrieval text; inputting the retrieval text into a first neural network to obtain a first probability set, wherein the first probability set comprises the probability that a recognition result belongs to each commodity category in the primary commodity categories and the recognition result corresponds to the retrieval text; inputting the retrieval text and the first probability set into a second neural network to obtain a second probability set, wherein the second probability set comprises the probability that the recognition result belongs to each commodity category in the secondary commodity categories, the recognition result corresponds to the retrieval text, and each commodity category of the secondary commodity categories belongs to a commodity category of the primary commodity categories; and acquiring the recognition result based on the second probability set. The method solves the problem of low accuracy when commodities are retrieved by inputting search words and achieves the technical effect of accurately retrieving commodities according to the search words.

Description

Commodity retrieval method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of natural language processing, and in particular, to a method and apparatus for retrieving a commodity, a computer device, and a storage medium.
Background
In a search scenario, in order to improve the relevance between a user search term (query) and the commodities returned, the intention behind the search term needs to be analyzed and the categories related to the search term need to be identified in advance; this step is called category prediction. Category prediction is essentially a short text classification problem in the Natural Language Processing (NLP) field, where the input short text is the search term and the output recognition result is a leaf category (leaf_category) of a commodity.
The development of short text classification has gone through three main stages: rule-based, shallow-model-based, and deep-learning-based. Before 2010, text classification models based on shallow learning were dominant. Shallow learning refers to models built on vocabulary statistics, such as naive Bayes, K-nearest neighbors, and support vector machines. These methods brought dramatic improvements in accuracy and robustness compared with the earlier rule-based methods. However, they still require manual feature extraction and feature design and are not end-to-end. Furthermore, they often ignore information at the level of textual meaning, which makes it difficult to learn the semantic information of words. Since 2010, text classification has gradually shifted from shallow learning models to deep learning models. Compared with shallow-learning-based methods, deep learning methods avoid manually designed rules and features and automatically provide semantically meaningful representations for text mining. Therefore, most text classification research is now based on deep learning models.
A commonly used short text classification model is the convolutional neural network (CNN) model for text classification (TextCNN). Its main principle is as follows: based on a pre-trained convolutional neural network model, a search term is fed into the network to obtain its embedding (a word embedding maps text into vectors); the convolutional layers extract the useful textual information to obtain a vector expression of the sentence; and finally a softmax function predicts the category most relevant to the current search term. However, in practical applications in the e-commerce field, the number of commodity categories in the category prediction task is large, so the training samples and categories are unevenly distributed. In addition, because the commodity categories in the e-commerce vocabulary are treated as mutually independent, i.e., the commodity categories are not connected to one another, the neural network model does not consider the hierarchical relationships among commodity categories during training, and the prediction accuracy of retrieving commodities directly from input search terms is low.
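For reference, the TextCNN pipeline summarized above (embedding, convolution, max pooling, fully connected layer, softmax) can be sketched roughly as follows. This is a minimal illustration assuming PyTorch; the vocabulary size, embedding dimension, convolution windows, and channel count are placeholders rather than values taken from this application.

```python
# Rough TextCNN-style sketch of the pipeline described above (PyTorch assumed;
# all sizes below are illustrative placeholders, not values from this application).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, num_classes=20,
                 windows=(2, 3, 4), channels=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)       # search word -> embedding expression
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, channels, kernel_size=w) for w in windows
        )
        self.fc = nn.Linear(channels * len(windows), num_classes)  # fully connected layer

    def forward(self, token_ids):              # token_ids: (batch, seq_len), seq_len >= max window
        x = self.embedding(token_ids).transpose(1, 2)              # (batch, embed_dim, seq_len)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]  # max pooling over time
        sentence_vec = torch.cat(feats, dim=1)                     # vector expression of the sentence
        return F.softmax(self.fc(sentence_vec), dim=1)             # probability of each category
```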
At present, no effective solution has been proposed in the related art for the problem of low accuracy when commodities are retrieved by inputting search terms.
Disclosure of Invention
The embodiment of the application provides a commodity retrieval method, a commodity retrieval device, computer equipment and a storage medium, and aims to at least solve the problem that the accuracy of commodity retrieval by inputting search terms in the related technology is low.
In a first aspect, an embodiment of the present application provides a commodity retrieval method, including:
acquiring a retrieval text;
inputting the retrieval text into a first neural network to obtain a first probability set; the first probability set comprises the probability that the recognition result belongs to each commodity category in the primary commodity categories, and the recognition result corresponds to the retrieval text;
inputting the retrieval text and the first probability set into a second neural network to obtain a second probability set; the second probability set comprises the probability that the recognition result belongs to each commodity category in the secondary commodity categories; the recognition result corresponds to the retrieval text; the commodity category of the secondary commodity category belongs to the commodity category of the primary commodity category;
and acquiring a recognition result based on the second probability set.
In one embodiment, said entering said retrieved text into a first neural network, resulting in a first set of probabilities comprises: and establishing a multi-level commodity category model according to the E-commerce thesaurus, wherein the multi-level commodity category model at least comprises the first-level commodity category and the second-level commodity category.
In one embodiment, the entering the retrieved text into the first neural network comprises: acquiring a training retrieval text and a first training probability set corresponding to the training retrieval text, wherein the first training probability set is the probability that a training identification result belongs to each commodity category of the primary commodity category, and the training identification result corresponds to the training retrieval text; establishing a training set based on the training retrieval text and the first training probability set; and training a first neural network model based on the training set to obtain a trained first neural network.
In one embodiment, the entering the search text into a first neural network, resulting in a first set of probabilities includes: inputting the search text into the first neural network; converting the retrieval text into a program language according to the E-commerce word stock to obtain a first retrieval vector; and inputting the first retrieval vector into a first logistic regression model to obtain the first probability set, wherein the first logistic regression model is established based on the primary commodity category.
In one embodiment, said inputting said retrieved text and said first set of probabilities into a second neural network comprises: acquiring the training retrieval text, a first training probability set and a corresponding second training probability set; the second training probability set is the probability that a training recognition result belongs to each commodity category of the secondary commodity categories, and the training recognition result corresponds to the training retrieval text; establishing a training set based on the training retrieval text, the first training probability set and the second training probability set; training a second neural network model based on the training set to obtain a trained second neural network.
In one embodiment, the inputting the search text and the first probability set into a second neural network to obtain a second probability set comprises: inputting the search text into the second neural network; converting the retrieval text into a program language according to the E-commerce word stock to obtain a first retrieval vector; generating a second retrieval vector according to the first retrieval vector and the first probability set; and inputting the second retrieval vector into a second logistic regression model to obtain the second probability set, wherein the second logistic regression model is established based on the secondary commodity category.
In one embodiment, the obtaining the recognition result based on the second probability set includes: and acquiring the commodity category with the highest probability in the secondary commodity categories as a search result and outputting the search result based on the second probability set.
In a second aspect, an embodiment of the present application provides a commodity search device, including:
an acquisition module: configured to acquire a retrieval text;
a first processing module: configured to input the retrieval text into a first neural network to obtain a first probability set; the first probability set comprises the probability that the recognition result belongs to each commodity category in the primary commodity categories, and the recognition result corresponds to the retrieval text;
a second processing module: configured to input the retrieval text and the first probability set into a second neural network to obtain a second probability set; the second probability set comprises the probability that the recognition result belongs to each commodity category in the secondary commodity categories; the recognition result corresponds to the retrieval text; the commodity category of the secondary commodity category belongs to the commodity category of the primary commodity category;
and an output module: configured to acquire a recognition result based on the second probability set.
In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the commodity retrieval method according to the first aspect is implemented.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the commodity retrieval method as described in the first aspect.
Compared with the related art, the commodity retrieval method provided by the embodiment of the application acquires a retrieval text; inputs the retrieval text into a first neural network to obtain a first probability set, the first probability set comprising the probability that the recognition result belongs to each commodity category in the primary commodity categories, the recognition result corresponding to the retrieval text; inputs the retrieval text and the first probability set into a second neural network to obtain a second probability set, the second probability set comprising the probability that the recognition result belongs to each commodity category in the secondary commodity categories, the recognition result corresponding to the retrieval text, and each commodity category of the secondary commodity categories belonging to a commodity category of the primary commodity categories; and acquires the recognition result based on the second probability set. By building a cascaded neural network, the recognition result of the retrieval text in the neural network at the current level is corrected by the recognition result of the neural network at the previous level, which solves the problem of low accuracy when commodities are retrieved by inputting search words and achieves the technical effect of accurately retrieving commodities according to the search words.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a commodity retrieval method according to an embodiment of the present application;
fig. 2 is an e-commerce domain category tree of a commodity retrieval method according to an embodiment of the present application;
fig. 3 is a diagram illustrating a multi-level neural network model of a commodity retrieval method according to a preferred embodiment of the present application;
fig. 4 is a block diagram showing the structure of a commodity retrieval device according to an embodiment of the present application;
fig. 5 is a hardware structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof used in this application are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "And/or" describes an association relationship of associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering of the objects.
Natural language processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers using natural language. Short text classification is one research direction of natural language processing. With the development of the internet e-commerce industry, retrieving commodities from user search words in actual commercial activities is essentially a short text classification problem in natural language processing. However, because the commodity categories in the e-commerce thesaurus are extremely numerous, retrieval performed directly from the search word tends to be inaccurate. Based on this, the present embodiment provides a commodity retrieval method.
The embodiment also provides a commodity retrieval method. Fig. 1 is a flowchart of a commodity retrieval method according to an embodiment of the present application, and as shown in fig. 1, the flowchart includes the following steps:
step S101, acquiring a search text.
Specifically, a user inputs a search word on the search interface of an e-commerce website or e-commerce software. For example, if the search word input on the search interface by the user is "autumn white sweater", the retrieval text acquired by the system is "autumn white sweater".
And S102, inputting the retrieval text into a first neural network to obtain a first probability set.
The first probability set comprises the probability that the recognition result belongs to each commodity category in the primary commodity categories, and the recognition result corresponds to the retrieval text. Specifically, the primary commodity categories include several commodity categories such as clothing, home appliances, and the like. The retrieval text is input into the first neural network to obtain the set of probabilities that the recognition result belongs to each commodity category in the primary commodity categories. For example, suppose the input retrieval text, i.e., the search word, is "autumn white sweater" and the primary commodity categories include clothing and home appliances. Inputting "autumn white sweater" into the first neural network yields a first probability set of [0.9, 0.1]. The first probability set is a vector expression of the retrieval result: it means that, given the retrieval text "autumn white sweater", the probability that the recognition result is clothing is 0.9 and the probability that it is home appliances is 0.1. This example only illustrates the meaning of the first probability set; in practice the primary commodity categories may include tens, hundreds, or thousands of commodity categories, which is not limited by the present invention.
In one embodiment, inputting the retrieval text into a first neural network to obtain a first probability set comprises: establishing a multi-level commodity category model according to the e-commerce thesaurus, wherein the multi-level commodity category model at least comprises the primary commodity categories and the secondary commodity categories. Specifically, in an e-commerce application scenario, the commodity categories can be classified based on the proprietary e-commerce thesaurus to establish a multi-level commodity category model. Fig. 2 shows the e-commerce domain category tree of the commodity retrieval method according to an embodiment of the present application. As shown in fig. 2, the e-commerce domain category tree is a multi-level commodity category model and includes three commodity category sets: primary categories, secondary categories, and leaf categories. The primary categories include clothing, home appliances, and the like; the secondary categories include women's tops, major home appliances, household appliances, and the like; the leaf categories include women's long-sleeved T-shirts, women's fur, women's jeans, women's casual pants, air conditioners, flat televisions, dehumidifiers, and the like. The e-commerce domain category tree is established according to the interrelations among the commodity categories: a three-level commodity category set is built for all e-commerce commodity categories according to the subordination relationships among them. For example, the women's long-sleeved T-shirt category in the leaf categories belongs to the women's tops category in the secondary categories, and the women's tops category in the secondary categories belongs to the clothing category in the primary categories. In addition, the number of categories in the primary categories is smaller than the number of categories in the secondary categories, which in turn is smaller than the number of categories in the leaf categories. The category sets included in the e-commerce domain category tree in this embodiment may comprise three or more levels, but include at least two levels, i.e., a primary category set and a secondary category set.
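Such a category tree can be represented as a simple nested mapping. The sketch below is purely illustrative: it uses only the example category names mentioned above, and the exact grouping of the branches is an assumed arrangement, not taken from this application.

```python
# Illustrative three-level e-commerce category tree (primary -> secondary -> leaf),
# built only from the example categories mentioned above; a real thesaurus is far larger.
category_tree = {
    "clothing": {
        "women's tops": ["women's long-sleeved T-shirts", "women's fur"],
        "women's bottoms": ["women's jeans", "women's casual pants"],   # assumed grouping
    },
    "home appliances": {
        "major appliances": ["air conditioners", "flat televisions"],
        "household appliances": ["dehumidifiers"],
    },
}

# The number of categories grows from level to level, as the category tree requires.
primary = list(category_tree)
secondary = [s for subs in category_tree.values() for s in subs]
leaves = [l for subs in category_tree.values() for ls in subs.values() for l in ls]
assert len(primary) <= len(secondary) <= len(leaves)
```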
In one embodiment, inputting the retrieval text into the first neural network comprises: acquiring a training retrieval text and a first training probability set corresponding to the training retrieval text, wherein the first training probability set is the probability that a training recognition result belongs to each commodity category of the primary commodity categories, and the training recognition result corresponds to the training retrieval text; establishing a training set based on the training retrieval text and the first training probability set; and training a first neural network model based on the training set to obtain a trained first neural network. Specifically, the first neural network should be trained before the search word is input into it. For example, the training retrieval texts include "white autumn sweater", "blue autumn sweater", "white winter sweater", and the like, and the primary commodity categories include clothing and home appliances. The first training probability set corresponding to a training retrieval text is, for example, [0.9, 0.1], indicating that the probability that the recognition result obtained from the input text is clothing is 0.9 and the probability that it is home appliances is 0.1. A training set is established from the training retrieval texts and the first training probability sets based on the primary commodity categories. The data of the training set is prior knowledge: both the training retrieval texts and the first training probability sets are determined data, and their correspondence is a preset, determined correspondence. The first neural network model is trained in advance on this training set to obtain the first neural network.
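A training sketch for the first-level sub-model follows; it assumes a model like the TextCNN sketch above (one that outputs softmax probabilities), PyTorch as the framework, and placeholder training pairs, function name, and hyper-parameters.

```python
# Hypothetical training sketch for the first-level sub-model (PyTorch assumed; the
# function name, training pairs and hyper-parameters are illustrative placeholders).
import torch
import torch.nn as nn

def train_first_level(model, train_pairs, epochs=5, lr=1e-3):
    """train_pairs: list of (token_id_list, primary_category_index)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.NLLLoss()       # the sketched model outputs probabilities, so feed log-probabilities
    for _ in range(epochs):
        for token_ids, primary_label in train_pairs:
            probs = model(torch.tensor([token_ids]))              # (1, num_primary_categories)
            loss = loss_fn(torch.log(probs + 1e-9), torch.tensor([primary_label]))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```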
In one embodiment, the entering the search text into a first neural network, resulting in a first set of probabilities includes: inputting the search text into the first neural network; converting the retrieval text into a program language according to the E-commerce word stock to obtain a first retrieval vector; and inputting the first retrieval vector into a first logistic regression model to obtain the first probability set, wherein the first logistic regression model is established based on the primary commodity category.
Specifically, the retrieval text is input into the first neural network and is first segmented into words according to the proprietary e-commerce thesaurus. Word segmentation is a step in the short text classification process. The invention adopts a word segmentation method based on string matching: the proprietary e-commerce thesaurus is used as a dictionary, the retrieval text is split into several parts, and each part is looked up in the dictionary; if the part is in the dictionary, the segmentation succeeds, otherwise segmentation and matching continue until it succeeds. During segmentation, the splitting rules and the matching order can be set as required; the invention does not limit the word segmentation method. For example, if the retrieval text is [autumn white sweater] and the e-commerce thesaurus contains [autumn], [white], and [sweater], the retrieval text is split into [autumn], [white], [sweater]. Random vector initialization is performed on the segmentation results [autumn], [white], and [sweater] to generate the embedding expression of the retrieval text: the embedding of [autumn] is [0.1, 0.2, 1.9], the embedding of [white] is [0.7, 0.4, 0.9], and the embedding of [sweater] is [0.45, 0.32, 1.59]; that is, the embedding expression of the retrieval text [autumn, white, sweater] is [[0.1, 0.2, 1.9], [0.7, 0.4, 0.9], [0.45, 0.32, 1.59]]. The embedding then passes sequentially through a convolutional layer, a max pooling layer, and a fully connected layer, which extract the semantic features of the retrieval text and produce its sentence vector, i.e., the first retrieval vector. The first retrieval vector converts the retrieval text input by the user into a representation recognizable by a machine, and because the sentence vector is generated from the embedding expression of the retrieval text, its semantics are close to the semantics of the retrieval text input by the user. The number of dimensions of the sentence vector is the same as the number of dimensions of each word vector. For example, the embedding expression of the retrieval text [autumn, white, sweater] in this embodiment is [[0.1, 0.2, 1.9], [0.7, 0.4, 0.9], [0.45, 0.32, 1.59]]; after the convolutional layer, the max pooling layer, and the fully connected layer, the first retrieval vector [0.42, 0.33, 1.55] is obtained and input into the softmax model to obtain the first probability set, i.e., the set of probabilities that the recognition result is each category in the primary commodity categories. The softmax logistic regression model is a generalization of the logistic regression model to multi-class problems; it maps the semantic vector to a probability for each category in the primary category set and normalizes them. In addition, a three-dimensional vector is used as the first retrieval vector in this embodiment; in practical applications, in order to express the retrieval text more accurately, a sentence vector with more dimensions, preferably a 128-dimensional vector, may be generated as the first retrieval vector, which is not limited by the present application.
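The dictionary-based string-matching segmentation described above can be sketched as a forward maximum match; the function name segment, the lexicon entries, and the space-free query string (standing in for a Chinese query, which has no word delimiters) are illustrative assumptions, not details from this application.

```python
# Forward-maximum-matching segmentation against a dictionary, as a rough sketch of the
# string-matching word segmentation described above (all names and entries illustrative).
def segment(text, dictionary, max_word_len=7):
    words, i = [], 0
    while i < len(text):
        for length in range(min(max_word_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:   # fall back to a single character
                words.append(candidate)
                i += length
                break
    return words

ecommerce_lexicon = {"autumn", "white", "sweater"}       # illustrative thesaurus entries
print(segment("autumnwhitesweater", ecommerce_lexicon))  # -> ['autumn', 'white', 'sweater']
```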
And S103, inputting the search text and the first probability set into a second neural network to obtain a second probability set.
The second probability set comprises the probability that the recognition result belongs to each commodity category in the secondary commodity categories; the recognition result corresponds to the retrieval text; and each commodity category of the secondary commodity categories belongs to a commodity category of the primary commodity categories. Specifically, the number of commodity categories in the secondary commodity categories is larger than that in the primary commodity categories, and there is a subordination relationship between them. The retrieval text and the target vector output by the first neural network model are input into the second neural network to obtain the set of probabilities that the recognition result belongs to each commodity category in the secondary commodity categories. For example, for the retrieval text "autumn white sweater", with primary commodity categories [clothing, home appliances] and secondary commodity categories [jacket, trousers, refrigerator, desk lamp], inputting the retrieval text "autumn white sweater" and the first probability set [0.9, 0.1] outputs a second probability set [0.7, 0.1, 0.1, 0.1]. The second probability set is a vector expression of the recognition result in the second neural network: the probability that the recognition result is jacket is 0.7, the probability that it is trousers is 0.1, the probability that it is refrigerator is 0.1, and the probability that it is desk lamp is 0.1.
In one embodiment, inputting the retrieval text and the first probability set into a second neural network comprises: acquiring the training retrieval text, the first training probability set, and a corresponding second training probability set, wherein the second training probability set is the probability that a training recognition result belongs to each commodity category of the secondary commodity categories and the training recognition result corresponds to the training retrieval text and the first training probability set; establishing a training set based on the training retrieval text, the first training probability set, and the second training probability set; and training a second neural network model based on the training set to obtain a trained second neural network. The second neural network should be trained before the search words are input into it. For example, the training retrieval texts include [white autumn sweater], [blue autumn sweater], [white winter sweater], and the like, with first training probability sets such as [0.9, 0.1] and [0.8, 0.2]. The secondary commodity categories comprise [jacket, trousers, refrigerator, desk lamp], and the second training probability set corresponding to a training retrieval text is, for example, [0.7, 0.1, 0.1, 0.1], indicating that for the input retrieval text the probability that the recognition result is jacket is 0.7, the probability that it is trousers is 0.1, the probability that it is refrigerator is 0.1, and the probability that it is desk lamp is 0.1. A training set is established from the training retrieval texts, the first training probability sets, and the second training probability sets based on the secondary commodity categories, and the second neural network model is trained in advance on this training set to obtain the second neural network.
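In other words, each second-level training sample pairs a training retrieval text with its first training probability set and its secondary category. A minimal, hypothetical construction using the toy values above (all names and numbers are placeholders):

```python
# Hypothetical second-level training set: each sample carries the training retrieval text,
# the first training probability set, and the secondary category. Values are placeholders.
secondary_categories = ["jacket", "trousers", "refrigerator", "desk lamp"]

raw_samples = [
    # (training retrieval text, first training probability set, secondary category)
    ("white autumn sweater", [0.9, 0.1], "jacket"),
    ("blue autumn sweater",  [0.8, 0.2], "jacket"),
    ("white winter sweater", [0.9, 0.1], "jacket"),
]

training_set = [
    (text, first_probs, secondary_categories.index(label))
    for text, first_probs, label in raw_samples
]
```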
In one embodiment, the inputting the search text and the first probability set into a second neural network to obtain a second probability set comprises: inputting the search text into the second neural network; converting the retrieval text into a program language according to the E-commerce word stock to obtain a first retrieval vector; generating a second retrieval vector according to the first retrieval vector and the first probability set; and inputting the second retrieval vector into a second logistic regression model to obtain the second probability set, wherein the second logistic regression model is established based on the secondary commodity category.
Specifically, the same retrieval text as the one input into the first neural network is input into the second neural network and segmented based on the proprietary e-commerce thesaurus to obtain the embedding expression of the retrieval text; if different segmentation dictionaries were adopted, different embeddings would be generated for the same retrieval text. In this embodiment, both the first and the second word segmentation processes segment according to the proprietary e-commerce thesaurus, so the embedding expression obtained is the same as the one obtained when the retrieval text is input into the first neural network model. The embedding passes sequentially through the convolutional layer, the max pooling layer, and the fully connected layer, which extract the semantic features of the retrieval text and refine the embedding expression into the first retrieval vector of the retrieval text. The first retrieval vector is then spliced with the first probability set, i.e., the first retrieval vector of the retrieval text is concatenated with the target vector obtained from the first neural network, yielding a corrected semantic vector, i.e., the second retrieval vector. The second retrieval vector is input into the softmax model to obtain the second probability set. For example: the retrieval text is [autumn white sweater], the primary commodity categories comprise [clothing, home appliances], and the secondary commodity categories comprise [jacket, trousers, refrigerator, desk lamp]. The embedding of the retrieval text obtained through word segmentation and random vector initialization is [0.3, 0.3, 0.4]; the embedding [0.3, 0.3, 0.4] passes sequentially through the convolutional layer, the max pooling layer, and the fully connected layer, and the first retrieval vector of the retrieval text [autumn white sweater] is [0.2, 0.3, 0.5]. The first probability set, i.e., the target vector output by the first neural network, is [0.9, 0.1]. Splicing the first retrieval vector with the target vector gives the second retrieval vector [0.2, 0.3, 0.5, 0.9, 0.1], which is input into the softmax model to obtain the second probability set [0.97, 0.01, 0.01, 0.01]. The second probability set indicates that, with the retrieval text [autumn white sweater] and the first probability set input into the second neural network, the probability that the recognition result is jacket is 0.97, the probability that it is trousers is 0.01, the probability that it is refrigerator is 0.01, and the probability that it is desk lamp is 0.01.
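The splicing step, i.e., concatenating the sentence vector of the retrieval text with the first probability set before the softmax layer, can be sketched as below. This assumes PyTorch; the class name SecondLevelHead and the dimensions are illustrative, and an untrained head will not reproduce the toy probabilities of the example.

```python
# Rough sketch of the second-level classifier head: the sentence vector of the retrieval
# text is concatenated with the first probability set and fed to a softmax classifier.
# (PyTorch assumed; class name and all dimensions are illustrative placeholders.)
import torch
import torch.nn as nn
import torch.nn.functional as F

class SecondLevelHead(nn.Module):
    def __init__(self, sentence_dim=3, num_primary=2, num_secondary=4):
        super().__init__()
        self.fc = nn.Linear(sentence_dim + num_primary, num_secondary)

    def forward(self, sentence_vec, first_probs):
        second_vec = torch.cat([sentence_vec, first_probs], dim=1)  # corrected semantic vector
        return F.softmax(self.fc(second_vec), dim=1)                # second probability set

# Using the toy numbers from the example above:
head = SecondLevelHead()
sentence_vec = torch.tensor([[0.2, 0.3, 0.5]])   # first retrieval vector of [autumn white sweater]
first_probs = torch.tensor([[0.9, 0.1]])         # first probability set
print(head(sentence_vec, first_probs))           # probabilities over 4 secondary categories
                                                 # (actual values depend on trained weights)
```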
And step S104, acquiring a recognition result based on the second probability set.
Specifically, the most probable commodity category in the secondary commodity categories is obtained according to the probability of each element in the second probability set output by the second neural network.
In one embodiment, obtaining the recognition result based on the second probability set includes: based on the second probability set, acquiring the commodity category with the highest probability among the secondary commodity categories as the retrieval result and outputting it. For example, the second probability set [0.97, 0.01, 0.01, 0.01] indicates that, with the retrieval text "autumn white sweater" and the first probability set input into the second neural network, the probability that the recognition result is jacket is 0.97, the probability that it is trousers is 0.01, the probability that it is refrigerator is 0.01, and the probability that it is desk lamp is 0.01; jacket, the commodity category corresponding to the maximum probability value 0.97, is output as the recognition result.
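Selecting the recognition result is then a simple arg-max over the second probability set; a minimal illustration with the toy numbers above:

```python
# Picking the recognition result: the secondary category with the highest probability.
secondary_categories = ["jacket", "trousers", "refrigerator", "desk lamp"]
second_probability_set = [0.97, 0.01, 0.01, 0.01]

best_index = max(range(len(second_probability_set)), key=second_probability_set.__getitem__)
recognition_result = secondary_categories[best_index]
print(recognition_result)   # -> jacket
```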
Through the above steps, a multi-level commodity category model is established through the e-commerce domain category tree. In the category tree, the number of commodity categories at each level increases progressively, which alleviates the problem of unbalanced distribution of training samples and training categories that a conventional neural network suffers from when there are many commodity categories. Moreover, the commodity categories are organized according to the subordination relationships among them, so the relationships among commodity categories are fully considered, solving the problem that the related art ignores the associations among commodity categories. In the commodity retrieval method, a multi-level neural network model is established according to the category tree, and the prediction result of each level's sub-model can correct the semantic vector generated when the retrieval text is input at the next level. This solves the problem of inaccurate recognition results caused by the point-to-point retrieval between retrieval text and commodity category in the related art and achieves the technical effect of accurately retrieving the commodity category according to the retrieval text.
The embodiments of the present application are described and illustrated below by means of preferred embodiments.
FIG. 3 is a diagram illustrating the multi-level neural network model of the commodity retrieval method according to a preferred embodiment of the present application. As shown in fig. 3, the multi-level neural network model includes a primary category sub-model, a secondary category sub-model, and a leaf category sub-model, each of which includes an embedding layer, a convolutional layer, a max pooling layer, a fully connected layer, and a softmax regression layer.
When a query is input by a user, it passes through the primary category sub-model, the secondary category sub-model, and the leaf category sub-model in sequence. The model structures of the three sub-models are basically identical, but the vector expressions of the query and the final output data differ.
For the first, primary category sub-model, the input is the user search word. First, the word segmentation result at the target level is obtained through segmentation with the proprietary e-commerce thesaurus; then random word vector initialization is performed on the segmentation result to obtain the vector expression (embedding) of the search word. Convolution operations with windows of 2, 3, and 4 are performed on the embedding expression respectively; after the convolution result of each window is max pooled, a fully connected layer produces the first sentence vector of the retrieval text; finally, softmax logistic regression is applied to the first sentence vector to obtain the first target vector, i.e., the output of the primary category sub-model, which is the prediction result for each primary category.
For the second, secondary category sub-model, the input is the same user search word, and the model structure is identical up to the softmax step; however, the models are mutually independent, i.e., each category sub-model is established on a different training set. Finally, the first sentence vector of the search word is spliced with the first target vector output by the primary category sub-model to obtain a new expression of the sentence vector, the second sentence vector, and softmax is then applied to obtain the second target vector, i.e., the output of the secondary category sub-model, which is the prediction result for each secondary commodity category.
Finally, the first sentence vector of the search word is spliced with the second target vector output by the secondary category sub-model to obtain a new expression of the sentence vector, the third sentence vector, which after softmax yields the output of the leaf category sub-model, i.e., the leaf category most relevant to the search word. After the whole model is trained, the multi-level neural network model continuously passes useful information from the sub-model that predicts fewer classification targets down to the next level, and finally produces the required leaf category prediction result.
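Putting the preferred embodiment together, each level can be a TextCNN-style sub-model whose classifier also receives the previous level's prediction, and prediction chains the three sub-models. The wiring below is a hypothetical sketch assuming PyTorch; names such as LevelSubModel and predict_leaf, and all sizes, are illustrative.

```python
# Hypothetical wiring of the three-level cascade: each sub-model encodes the query with its
# own TextCNN-style encoder, splices the previous level's prediction (if any) onto the
# sentence vector, and applies softmax. (PyTorch assumed; all sizes are illustrative.)
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelSubModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_classes, prev_dim=0,
                 windows=(2, 3, 4), channels=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, channels, kernel_size=w) for w in windows
        )
        self.fc = nn.Linear(channels * len(windows) + prev_dim, num_classes)

    def forward(self, token_ids, prev_probs=None):     # token_ids: (batch, seq_len >= 4)
        x = self.embedding(token_ids).transpose(1, 2)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        sent_vec = torch.cat(feats, dim=1)                        # sentence vector of the query
        if prev_probs is not None:
            sent_vec = torch.cat([sent_vec, prev_probs], dim=1)   # splice previous level's output
        return F.softmax(self.fc(sent_vec), dim=1)

# Example construction (sizes illustrative): 2 primary, 4 secondary, 8 leaf categories.
level1 = LevelSubModel(vocab_size=10000, embed_dim=128, num_classes=2)
level2 = LevelSubModel(vocab_size=10000, embed_dim=128, num_classes=4, prev_dim=2)
leaf_model = LevelSubModel(vocab_size=10000, embed_dim=128, num_classes=8, prev_dim=4)

def predict_leaf(token_ids):
    p1 = level1(token_ids)            # primary category prediction
    p2 = level2(token_ids, p1)        # secondary category prediction, corrected by p1
    return leaf_model(token_ids, p2)  # leaf category prediction, corrected by p2
```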
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
The present embodiment further provides a commodity retrieval device, which is used to implement the foregoing embodiments and preferred embodiments; what has already been described will not be repeated. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram showing the structure of a commodity retrieval device according to an embodiment of the present application, and as shown in fig. 4, the device includes:
the acquisition module 10: for obtaining the search text.
The first processing module 20: the first neural network is used for inputting the retrieval text into the first neural network to obtain a first probability set; the first set of probabilities includes a probability that the recognition result belongs to each of the primary categories of merchandise, the recognition result corresponding to the search text.
The second processing module 30: the first probability set is used for inputting the search text and the first probability set into a second neural network to obtain a second probability set; the second probability set comprises the probability that the recognition result belongs to each commodity category in the secondary commodity categories; the recognition result corresponds to the retrieval text; the commodity category of the secondary commodity category belongs to the commodity category of the primary commodity category.
The output module 40: and obtaining a recognition result based on the second probability set.
The first processing module 20 is further configured to establish a multi-level commodity category model according to the e-commerce thesaurus, where the multi-level commodity category model at least includes the first-level commodity category and the second-level commodity category.
The first processing module 20 is further configured to obtain a training retrieval text and a first training probability set corresponding to the training retrieval text, where the first training probability set is a probability that a training recognition result belongs to each commodity category of the first-class commodity category, and the training recognition result corresponds to the training retrieval text; establishing a training set based on the training retrieval text and the first training probability set; and training a first neural network model based on the training set to obtain a trained first neural network.
The first processing module 20 is further configured to input the search text into the first neural network; converting the retrieval text into a program language according to the E-commerce word stock to obtain a first retrieval vector; and inputting the first retrieval vector into a first logistic regression model to obtain the first probability set, wherein the first logistic regression model is established based on the primary commodity category.
The second processing module 30 is further configured to obtain the training retrieval text, the first training probability set, and a corresponding second training probability set; the second training probability set is the probability that a training recognition result belongs to each commodity category of the secondary commodity categories, and the training recognition result corresponds to the training retrieval text; establishing a training set based on the training retrieval text, the first training probability set and the second training probability set; training a second neural network model based on the training set to obtain a trained second neural network.
The second processing module 30 is further configured to input the search text into the second neural network; converting the retrieval text into a program language according to the E-commerce word stock to obtain a first retrieval vector; generating a second retrieval vector according to the first retrieval vector and the first probability set; and inputting the second retrieval vector into a second logistic regression model to obtain the second probability set, wherein the second logistic regression model is established based on the secondary commodity category.
And the output module 40 acquires the commodity category with the highest probability in the secondary commodity categories as a search result based on the second probability set and outputs the search result.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
In addition, the commodity retrieval method described in conjunction with fig. 1 in the embodiment of the present application may be implemented by a computer device. Fig. 5 is a hardware structure diagram of a computer device according to an embodiment of the present application.
The computer device may comprise a processor 51 and a memory 52 in which computer program instructions are stored.
Specifically, the processor 51 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 52 may include, among other things, mass storage for data or instructions. By way of example and not limitation, the memory 52 may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 52 may include removable or non-removable (or fixed) media, where appropriate. The memory 52 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 52 is a non-volatile memory. In particular embodiments, the memory 52 includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be mask-programmed ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), Electrically Alterable ROM (EAROM), or FLASH memory, or a combination of two or more of these, where appropriate. The RAM may be Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), where the DRAM may be Fast Page Mode DRAM (FPM DRAM), Extended Data Out DRAM (EDO DRAM), Synchronous DRAM (SDRAM), and the like.
The memory 52 may be used to store or cache various data files that need to be processed and/or used for communication, as well as possible computer program instructions executed by the processor 51.
The processor 51 may implement any one of the commodity retrieval methods in the above embodiments by reading and executing the computer program instructions stored in the memory 52.
In some of these embodiments, the computer device may also include a communication interface 53 and a bus 50. As shown in fig. 5, the processor 51, the memory 52, and the communication interface 53 are connected via the bus 50 to complete mutual communication.
The communication interface 53 is used to implement communication among the modules, apparatuses, units, and/or devices in the embodiments of the present application. The communication interface 53 may also perform data communication with external devices such as image/data acquisition devices, databases, external storage, image/data processing workstations, and the like.
Bus 50 comprises hardware, software, or both coupling the components of the computer device to each other. Bus 50 includes, but is not limited to, at least one of the following: a data bus, an address bus, a control bus, an expansion bus, and a local bus. By way of example and not limitation, Bus 50 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 50 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
The computer device may execute the commodity retrieval method in the embodiment of the present application based on the acquired computer program instruction, thereby implementing the commodity retrieval method described in conjunction with fig. 1.
In addition, in combination with the commodity retrieval method in the above embodiments, an embodiment of the present application may provide a computer-readable storage medium for implementation. The computer-readable storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any of the commodity retrieval methods in the above embodiments.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and such variations and modifications fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for retrieving a commodity, comprising:
acquiring a retrieval text;
inputting the retrieval text into a first neural network to obtain a first probability set; the first probability set comprises the probability that a recognition result belongs to each commodity category of the primary commodity categories, and the recognition result corresponds to the retrieval text;
inputting the retrieval text and the first probability set into a second neural network to obtain a second probability set; the second probability set comprises the probability that the recognition result belongs to each commodity category of the secondary commodity categories; the recognition result corresponds to the retrieval text; each commodity category of the secondary commodity categories belongs to a commodity category of the primary commodity categories;
and acquiring the recognition result based on the second probability set.
2. The method of claim 1, wherein inputting the retrieval text into the first neural network to obtain the first probability set comprises:
establishing a multi-level commodity category model according to an E-commerce thesaurus, wherein the multi-level commodity category model comprises at least the primary commodity categories and the secondary commodity categories.
3. The method of claim 2, wherein inputting the retrieval text into the first neural network comprises:
acquiring a training retrieval text and a first training probability set corresponding to the training retrieval text, wherein the first training probability set is the probability that a training recognition result belongs to each commodity category of the primary commodity categories, and the training recognition result corresponds to the training retrieval text;
establishing a training set based on the training retrieval text and the first training probability set;
and training a first neural network model based on the training set to obtain a trained first neural network.
4. The method of claim 3, wherein inputting the retrieval text into the first neural network to obtain the first probability set comprises:
inputting the retrieval text into the first neural network;
converting the retrieval text into a program-language representation according to the E-commerce thesaurus to obtain a first retrieval vector;
and inputting the first retrieval vector into a first logistic regression model to obtain the first probability set, wherein the first logistic regression model is established based on the primary commodity category.
5. The method of claim 4, wherein inputting the retrieval text and the first probability set into the second neural network comprises:
acquiring the training retrieval text, the first training probability set, and a corresponding second training probability set; the second training probability set is the probability that the training recognition result belongs to each commodity category of the secondary commodity categories, and the training recognition result corresponds to the training retrieval text;
establishing a training set based on the training retrieval text, the first training probability set and the second training probability set;
training a second neural network model based on the training set to obtain a trained second neural network.
6. The method of claim 5, wherein inputting the retrieval text and the first probability set into the second neural network to obtain the second probability set comprises:
inputting the retrieval text into the second neural network;
converting the retrieval text into a program-language representation according to the E-commerce thesaurus to obtain the first retrieval vector;
generating a second retrieval vector according to the first retrieval vector and the first probability set;
and inputting the second retrieval vector into a second logistic regression model to obtain the second probability set, wherein the second logistic regression model is established based on the secondary commodity category.
7. The commodity retrieval method according to claim 1, wherein acquiring the recognition result based on the second probability set comprises:
acquiring, based on the second probability set, the commodity category with the highest probability among the secondary commodity categories as a retrieval result, and outputting the retrieval result.
8. A commodity retrieval device, comprising:
an acquisition module, configured to acquire a retrieval text;
a first processing module, configured to input the retrieval text into a first neural network to obtain a first probability set; the first probability set comprises the probability that a recognition result belongs to each commodity category of the primary commodity categories, and the recognition result corresponds to the retrieval text;
a second processing module, configured to input the retrieval text and the first probability set into a second neural network to obtain a second probability set; the second probability set comprises the probability that the recognition result belongs to each commodity category of the secondary commodity categories; the recognition result corresponds to the retrieval text; each commodity category of the secondary commodity categories belongs to a commodity category of the primary commodity categories;
an output module, configured to acquire the recognition result based on the second probability set.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the commodity retrieval method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the commodity retrieval method according to any one of claims 1 to 7.
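Note on the claimed data flow. Claims 1 to 7 recite a two-stage classification of a retrieval text: the text is converted into a vector according to an E-commerce thesaurus, a first model outputs a first probability set over the primary commodity categories, that probability set is concatenated with the retrieval vector to form a second retrieval vector, and a second model outputs a second probability set over the secondary commodity categories, from which the highest-probability category is returned. The Python sketch below is a minimal, non-authoritative illustration of that data flow only. The toy texts and labels, the function names, and the use of scikit-learn's CountVectorizer and LogisticRegression as stand-ins for the E-commerce thesaurus and the first and second neural networks are assumptions introduced here for illustration, not the claimed implementation.

# Illustrative sketch only; all names and data below are hypothetical.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: retrieval texts labelled with a primary commodity
# category and a secondary commodity category that belongs to it.
train_texts = ["mens running shoes", "womens leather boots",
               "android smart phone", "usb type c charging cable"]
primary_labels = ["footwear", "footwear", "electronics", "electronics"]
secondary_labels = ["sport shoes", "boots", "mobile phones", "accessories"]

# Stand-in for converting the retrieval text into a program-language
# representation according to the E-commerce thesaurus: a bag-of-words
# vectorizer whose vocabulary plays the role of the thesaurus.
vectorizer = CountVectorizer()
X_first = vectorizer.fit_transform(train_texts).toarray()     # first retrieval vectors

# First stage: probabilities over the primary commodity categories.
first_model = LogisticRegression(max_iter=1000)
first_model.fit(X_first, primary_labels)
first_train_probs = first_model.predict_proba(X_first)        # first training probability set

# Second stage: the second retrieval vector is the first retrieval vector
# concatenated with the first probability set, classified over the
# secondary commodity categories.
X_second = np.hstack([X_first, first_train_probs])
second_model = LogisticRegression(max_iter=1000)
second_model.fit(X_second, secondary_labels)

def retrieve(retrieval_text):
    # Return the secondary commodity category with the highest probability.
    x1 = vectorizer.transform([retrieval_text]).toarray()     # first retrieval vector
    p1 = first_model.predict_proba(x1)                        # first probability set
    x2 = np.hstack([x1, p1])                                  # second retrieval vector
    p2 = second_model.predict_proba(x2)                       # second probability set
    return second_model.classes_[int(np.argmax(p2))]          # recognition result

print(retrieve("trail running shoes"))  # expected to print "sport shoes" on this toy data

Passing the first probability set into the second stage is what ties the fine-grained prediction to the coarse-grained one; since claims 4 and 6 place a logistic regression model inside each neural network, substituting the trained first and second neural networks of claims 3 and 5 for the two stages above would leave this data flow unchanged.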
CN202011169418.7A 2020-10-28 2020-10-28 Commodity retrieval method and device, computer equipment and storage medium Pending CN112241493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011169418.7A CN112241493A (en) 2020-10-28 2020-10-28 Commodity retrieval method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011169418.7A CN112241493A (en) 2020-10-28 2020-10-28 Commodity retrieval method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112241493A true CN112241493A (en) 2021-01-19

Family

ID=74169972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011169418.7A Pending CN112241493A (en) 2020-10-28 2020-10-28 Commodity retrieval method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112241493A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424296A (en) * 2013-09-02 2015-03-18 阿里巴巴集团控股有限公司 Query word classifying method and query word classifying device
CN111353838A (en) * 2018-12-21 2020-06-30 北京京东尚科信息技术有限公司 Method and device for automatically checking commodity category
CN110287317A (en) * 2019-06-06 2019-09-27 昆明理工大学 A kind of level multi-tag medical care problem classification method based on CNN-DBN
CN110737801A (en) * 2019-10-14 2020-01-31 腾讯科技(深圳)有限公司 Content classification method and device, computer equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112965998A (en) * 2021-02-04 2021-06-15 成都健数科技有限公司 Compound database establishing and searching method and system
CN113468414A (en) * 2021-06-07 2021-10-01 广州华多网络科技有限公司 Commodity searching method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108197109B (en) Multi-language analysis method and device based on natural language processing
US11704487B2 (en) System and method for fashion attributes extraction
WO2020082569A1 (en) Text classification method, apparatus, computer device and storage medium
CN111190997B (en) Question-answering system implementation method using neural network and machine learning ordering algorithm
WO2020186627A1 (en) Public opinion polarity prediction method and apparatus, computer device, and storage medium
WO2019019860A1 (en) Method and apparatus for training classification model
WO2020147409A1 (en) Text classification method and apparatus, computer device, and storage medium
Shuang et al. A sentiment information Collector–Extractor architecture based neural network for sentiment analysis
CN113434858B (en) Malicious software family classification method based on disassembly code structure and semantic features
CN105760363B (en) Word sense disambiguation method and device for text file
KR20200096133A (en) Method, apparatus and device for constructing data model, and medium
CN107391565B (en) Matching method of cross-language hierarchical classification system based on topic model
CN112241493A (en) Commodity retrieval method and device, computer equipment and storage medium
CN109271624B (en) Target word determination method, device and storage medium
Yuan et al. Graph attention network with memory fusion for aspect-level sentiment analysis
CN112989208A (en) Information recommendation method and device, electronic equipment and storage medium
CN112925904A (en) Lightweight text classification method based on Tucker decomposition
CN113609847B (en) Information extraction method, device, electronic equipment and storage medium
CN115098857A (en) Visual malicious software classification method and device
Lin et al. An adaptive masked attention mechanism to act on the local text in a global context for aspect-based sentiment analysis
Parvathi et al. Identifying relevant text from text document using deep learning
CN112528653B (en) Short text entity recognition method and system
CN112948573B (en) Text label extraction method, device, equipment and computer storage medium
Agathangelou et al. Mining domain-specific dictionaries of opinion words
CN109446321B (en) Text classification method, text classification device, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210119