CN110321484B - Product recommendation method and device - Google Patents
- Publication number
- CN110321484B (grant) · CN201910530886.3A (application)
- Authority
- CN
- China
- Prior art keywords
- vector
- self
- neural network
- connection matrix
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Item recommendations
Abstract
The embodiment of the invention discloses a product recommendation method and device, wherein the method comprises the following steps: the server constructs a graph network according to the product information and the user information, wherein the graph network comprises N nodes, and N is a positive integer; the server acquires a first connection matrix of the graph network according to the connection relation of the N nodes in the graph network; the server performs unsupervised training on the self-coding neural network by using the first connection matrix, and constructs a first loss function; the server performs supervised training on the self-coding neural network by using the first connection matrix, and constructs a second loss function according to the minimization of the difference between the dense feature vectors of different row vectors of the first connection matrix; the server combines the first loss function and the second loss function to construct an objective function, and optimizes the objective function; the server recommends products to the user based on the trained self-coding neural network. The embodiment of the invention can infer the potential, deeper demands of customers, improving user experience and recommendation efficiency.
Description
Technical Field
The invention relates to the technical field of computer technology and signal processing, in particular to a product recommendation method and device.
Background
Conventional product recommendation algorithms are basically all based on "collaborative filtering", e.g. collaborative filtering based on customers or collaborative filtering based on products. In the field of product recommendation, these conventional methods have a critical shortcoming: in practical application the recommended content lacks diversity, and the potential, deeper demands of customers cannot be inferred. However, with the development of technology, users have increasingly high requirements for diversified and customized products, and the lack of diversity and customization in recommendation reduces user experience and recommendation efficiency. This is because conventional methods are based only on easily understood surface features (e.g., gender, preferences, etc.), which, while helping people interpret the results, do not allow deep product features to be exploited.
In summary, the current product recommendation algorithm cannot infer the potential and deep demands of customers, which may result in poor user experience and low recommendation efficiency.
Disclosure of Invention
The embodiment of the invention provides a product recommendation method and device, which can infer the potential, deeper demands of customers and improve user experience and recommendation efficiency.
In a first aspect, an embodiment of the present invention provides a product recommendation method, including the steps of: the server constructs a graph network according to the product information and the user information, wherein the graph network comprises N nodes, and N is a positive integer; the server acquires a first connection matrix of the graph network according to the connection relation of the N nodes in the graph network; the server performs unsupervised training on the self-coding neural network by using the first connection matrix, and constructs a first loss function based on the minimization of the difference between the row vectors of the first connection matrix and the reconstruction vectors of the row vectors of the first connection matrix; the server performs supervised training on the self-coding neural network by using the first connection matrix, the input variables of the supervised training are the ith row vector and the jth row vector of the first connection matrix, the output vectors of the supervised training are the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, and a second loss function is constructed according to the minimization of the difference between the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, where i and j are positive integers greater than or equal to 1 and less than or equal to N; the server combines the first loss function and the second loss function to construct an objective function, and optimizes the objective function; the server recommends products to the user based on the trained self-coding neural network.
In the embodiment of the invention, a server constructs a graph network according to product information and user information, and acquires a connection matrix of the graph network by utilizing the connection relation of N nodes in the graph network; the server performs unsupervised training on the self-coding neural network by using a connection matrix and builds a first loss function; the server performs supervised training on the self-coding neural network by using the connection matrix, and constructs a second loss function according to the minimization of the difference of dense feature vectors of different row vectors in the connection matrix; the server builds and optimizes an objective function by combining the first loss function and the second loss function, so that the optimization of the self-coding neural network is realized; and recommending the product to the user by the server through the optimized self-coding neural network. According to the embodiment of the invention, the self-coding neural network is trained by utilizing the connection relation between the user and the product, the deep features of the user and the product can be extracted through the trained self-coding neural network, the most suitable product can be accurately recommended to the user, and the user experience and the recommendation efficiency are improved.
Optionally, if the ith node and the jth node in the graph network are connected, the value of the element of the ith row and the jth column of the first connection matrix is 1, and if the ith node and the jth node in the graph network are not connected, the value of the element of the ith row and the jth column of the first connection matrix is 0, and the self-coding neural network comprises an input layer, K-1 hidden layers and an output layer; the input variable of the unsupervised training is a row vector of a first connection matrix, the output vector of the unsupervised training is a reconstruction vector of the row vector of the first connection matrix, the server uses the first connection matrix to perform the unsupervised training on the self-coding neural network, and a first loss function is constructed based on the minimization of the difference between the row vector of the first connection matrix and the reconstruction vector of the row vector of the first connection matrix, and the method comprises the following steps:
The server inputs the row vector x_i of the first connection matrix X into the self-coding neural network, and extracts the dense feature vector y^(K)_i of the row vector x_i through the self-coding neural network. The dense feature vector is expressed as y^(K)_i = W·x_i + b, where y^(K)_i is the output vector of the output layer of the self-coding neural network, W is the weight matrix of the self-coding neural network, and b is the bias vector of the self-coding neural network. The server takes the dense feature vector as the reverse input variable of the self-coding neural network, and the reverse output vector of the self-coding neural network is the reconstruction vector x̂_i of the row vector, expressed as x̂_i = Ŵ·y^(K)_i + b̂, where Ŵ and b̂ are the weight matrix and bias vector of the reverse (decoding) half of the network. The server constructs the first loss function ψ1 based on the minimization of the difference between the row vector x_i and the reconstruction vector x̂_i, expressed as ψ1 = Σ_{i=1}^N ‖x̂_i − x_i‖₂², where ‖·‖₂ denotes the two-norm of a vector.
Optionally, constructing the first loss function based on minimization of differences between the row vectors of the first connection matrix and the reconstructed vector of the row vectors of the first connection matrix includes:
the server determines a second connection matrix S according to the first connection matrix X: if the element x_{i,j} of the ith row and jth column of the first connection matrix has a value of 0, the element s_{i,j} of the ith row and jth column of the second connection matrix has a value of 1; if x_{i,j} has a value of 1, then s_{i,j} = a, where a is a hyper-parameter greater than 1. The server constructs the first loss function ψ1 based on the row vector x_i and the reconstruction vector x̂_i, expressed as ψ1 = Σ_{i=1}^N ‖(x̂_i − x_i) ⊙ s_i‖₂², where ⊙ denotes the element-wise product and s_i is the ith row vector of S.
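The weighted reconstruction loss above can be sketched as follows. This is a minimal numpy illustration, assuming the loss takes the SDNE-style form described in the text; the value a = 5.0 is an illustrative hyper-parameter, not taken from the patent.

```python
import numpy as np

def second_connection_matrix(x, a=5.0):
    """S from X: s[i, j] = a (a > 1) where x[i, j] = 1, else 1.
    The hyper-parameter value a = 5.0 is illustrative only."""
    return np.where(x == 1, a, 1.0)

def first_loss(x, x_hat, s):
    """psi_1 = sum_i || (x_hat_i - x_i) * s_i ||_2^2, where * is the
    element-wise product: reconstruction errors on existing links
    (x[i, j] = 1) are penalised a times more heavily than zeros."""
    return float(np.sum(((x_hat - x) * s) ** 2))
```

Penalising errors on the 1-entries more heavily keeps the sparse adjacency rows from being reconstructed as all-zero vectors.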
Optionally, the server performs supervised training on the self-coding neural network by using the first connection matrix; the input variables of the supervised training are the ith row vector and the jth row vector of the first connection matrix, the output vectors of the supervised training are the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, and a second loss function is constructed according to the minimization of the difference between the dense feature vector of the ith row vector and the dense feature vector of the jth row vector. The method comprises the following steps: the server inputs the row vector x_i of the first connection matrix into the self-coding neural network, and extracts the dense feature vector y^(K)_i of the row vector x_i through the self-coding neural network; the server inputs the row vector x_j of the first connection matrix into the self-coding neural network, and extracts the dense feature vector y^(K)_j of the row vector x_j through the self-coding neural network; the server constructs the second loss function ψ2 based on the minimization of the difference between the dense feature vector y^(K)_i and the dense feature vector y^(K)_j, expressed as ψ2 = Σ_{i,j=1}^N x_{i,j}·‖y^(K)_i − y^(K)_j‖₂².
Optionally, the objective function ψ is expressed as ψ = α·ψ1 + (1 − α)·ψ2, where the weighting coefficient α is greater than 0 and less than 1.
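The supervised loss and the combined objective can be sketched as below. This is a hedged illustration: the pairwise form of ψ2 and the convex combination with a weight α in (0, 1) follow the surrounding text, but the exact weighting in the patent's formula image is not legible in this extraction.

```python
import numpy as np

def second_loss(x, y):
    """psi_2 = sum_{i,j} x[i, j] * || y(K)_i - y(K)_j ||_2^2: nodes that
    are connected in the graph are pulled towards similar dense feature
    vectors. y is an (N, d_K) array of dense feature vectors."""
    total = 0.0
    n = x.shape[0]
    for i in range(n):
        for j in range(n):
            if x[i, j]:
                total += float(np.sum((y[i] - y[j]) ** 2))
    return total

def objective(psi_1, psi_2, alpha=0.5):
    """One plausible combination of the two losses with 0 < alpha < 1;
    alpha = 0.5 is an illustrative default."""
    return alpha * psi_1 + (1.0 - alpha) * psi_2
```

In practice the objective would be minimised over the network weights W, b by gradient descent; only the loss values are computed here.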
Optionally, the server recommends products to the user based on the trained self-coding neural network, including: the server inputs the row vector of the first user in the first connection matrix into the self-coding neural network; the server extracts dense feature vectors of the row vectors of the first user by using the self-coding neural network; the server extracts dense feature vectors of M target products in the first connection matrix by using the self-coding neural network; the server calculates the difference degree of the dense feature vectors of the M target products and the dense feature vectors of the row vectors of the first user, and the product with the smallest difference degree with the dense feature vectors of the row vectors of the first user in the M target products is the product recommended to the first user.
Optionally, calculating the degree of difference between the dense feature vectors of the M target products and the dense feature vector of the row vector of the first user includes: calculating the difference degree C_th of the dense feature vector of the first product among the M target products and the dense feature vector of the row vector of the first user, expressed as C_th = ‖y^(K)_t − y^(K)_h‖₂², where the row vector of the first product is the t-th row vector of the first connection matrix, and the row vector of the first user is the h-th row vector of the first connection matrix.
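The recommendation step reduces to a nearest-neighbour search in the dense feature space, which can be sketched as:

```python
import numpy as np

def recommend(y_user, y_products):
    """Difference degree C = || y(K)_t - y(K)_h ||_2^2 between each target
    product's dense feature vector and the first user's dense feature
    vector; the product with the smallest difference degree is the one
    recommended to the user. Returns (index, all difference degrees)."""
    diffs = [float(np.sum((y_p - y_user) ** 2)) for y_p in y_products]
    return int(np.argmin(diffs)), diffs
```

Because the dense feature vectors are d_K-dimensional with d_K far less than N, this comparison is much cheaper than comparing the raw N-dimensional adjacency rows.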
In a second aspect, an embodiment of the present invention further provides a product recommendation device, where the product recommendation device can implement the beneficial effects of the product recommendation method described in the first aspect. The functions of the device can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module corresponding to the above functions.
Optionally, the device comprises a construction unit, an acquisition unit, an unsupervised training unit, a supervised training unit, an optimization unit and a recommendation unit. Wherein,
the construction unit is used for constructing a graph network according to the product information and the user information, wherein the graph network comprises N nodes, and N is a positive integer.
And the acquisition unit is used for acquiring a first connection matrix of the graph network according to the connection relation of the N nodes in the graph network.
And the unsupervised training unit is used for carrying out unsupervised training on the self-coding neural network by utilizing the first connection matrix, and constructing a first loss function based on the minimization of the difference between the row vector of the first connection matrix and the reconstruction vector of the row vector of the first connection matrix.
The supervised training unit is used for performing supervised training on the self-coding neural network by using the first connection matrix; the input variables of the supervised training are the ith row vector and the jth row vector of the first connection matrix, the output vectors of the supervised training are the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, and a second loss function is constructed according to the minimization of the difference between the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, where i and j are positive integers greater than or equal to 1 and less than or equal to N.
And the optimizing unit is used for constructing an objective function by combining the first loss function and the second loss function and optimizing the objective function.
And the recommending unit is used for recommending products to the user based on the trained self-coding neural network.
Optionally, if the ith node and the jth node in the graph network are connected, the value of the element of the ith row and the jth column of the first connection matrix is 1, and if the ith node and the jth node in the graph network are not connected, the value of the element of the ith row and the jth column of the first connection matrix is 0, and the self-coding neural network comprises an input layer, K-1 hidden layers and an output layer; the input variable of the unsupervised training is a row vector of the first connection matrix, the output vector of the unsupervised training is a reconstructed vector of the row vector of the first connection matrix, and the unsupervised training unit comprises:
an unsupervised forward training unit for training the row vector X of the first connection matrix X i Inputting the self-coding neural network, extracting row vector x through the self-coding neural network i Is a dense feature vector y of (1) (K) i Dense feature vector y (K) i Represented as follows, y (K) i =W*x i +b,y (K) i The output vector of the output layer of the self-coding neural network is W, the weight matrix of the self-coding neural network is W, and the bias vector of the self-coding neural network is b.
An unsupervised reverse training unit for taking the dense feature vector as a reverse input variable of the self-coding neural network, and the reverse output vector of the self-coding neural network as a reconstruction vector of the row vectorReconstruction vector +.>The expression is as follows,/>
a first construction unit for based on the row vector x i Reconstructing vectorsIs to construct a first loss function ψ 1 First loss function ψ 1 Expressed as follows>Wherein the symbol->Representing the two norms of the vector.
Optionally, the first building unit includes:
a determining unit for determining a second connection matrix S based on the first connection matrix X if the element X of the ith row and jth column of the first connection matrix i,j With a value of 0, then element s of the ith row and jth column of the second connection matrix i,j If the value of (1) is x, the element of the ith row and jth column of the first connection matrix i,j With a value of 1, then element s of the ith row and jth column of the second connection matrix i,j A is a super parameter greater than 1.
A second construction unit for based on the row vector x i Reconstructing vectorsIs to construct a first loss function ψ 1 First loss function ψ 1 Expressed as follows>
Optionally, the supervised training unit comprises:
a first supervised training unit for extracting the first connection matrix from the server Is the row vector x of (2) i Inputting the self-coding neural network, extracting row vector x through the self-coding neural network i Is a dense feature vector y of (1) (K) i 。
A supervised second training unit for training the row vectors x of the first connection matrix j Inputting the self-coding neural network, extracting row vector x through the self-coding neural network j Is a dense feature vector y of (1) (K) j 。
A third construction unit for generating a dense feature vector y (K) i And dense feature vector y (K) j Is to construct a second loss function ψ 2 A second loss function ψ 2 The expression is as follows,
Optionally, the objective function ψ is expressed as ψ = α·ψ1 + (1 − α)·ψ2, where the weighting coefficient α is greater than 0 and less than 1.
Optionally, the recommending unit includes:
and the input unit is used for inputting the row vector of the first user in the first connection matrix into the self-coding neural network.
A first extraction unit for extracting dense feature vectors of row vectors of the first user using the self-encoding neural network.
And the second extraction unit is used for extracting dense feature vectors of M target products in the first connection matrix by using the self-coding neural network.
The computing unit is used for computing the difference degree of the dense feature vectors of the M target products and the dense feature vectors of the row vectors of the first user, and the product with the smallest difference degree with the dense feature vectors of the row vectors of the first user in the M target products is the product recommended to the first user.
Optionally, the computing unit is specifically used for calculating the difference degree C_th of the dense feature vector of the first product among the M target products and the dense feature vector of the row vector of the first user, expressed as C_th = ‖y^(K)_t − y^(K)_h‖₂², where the row vector of the first product is the t-th row vector of the first connection matrix, and the row vector of the first user is the h-th row vector of the first connection matrix.
In a third aspect, an embodiment of the present invention further provides a server, where the server can implement the beneficial effects of the product recommendation method described in the first aspect. The functions of the server can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes at least one module corresponding to the above functions. The server comprises a memory for storing a computer program supporting the server to perform the method described above, the computer program comprising program instructions, a processor for controlling and managing the actions of the server in accordance with the program instructions, and a transceiver for supporting the communication of the server with other communication devices.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having instructions stored thereon that, when executed on a processor, cause the processor to perform the product recommendation method described in the first aspect above.
Drawings
The drawings used in the description of the embodiments or the prior art are briefly introduced as follows.
Fig. 1 is a schematic structural diagram of a server according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a product recommendation method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a graph network according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a product recommendation device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the accompanying drawings in the embodiments of the present invention. It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, the terms "first," "second," and "third," etc. are used for distinguishing between different objects and not for describing a particular sequential order.
It is noted that the terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be noted that, the server in the embodiment of the present application may be a conventional server capable of bearing services and guaranteeing service capabilities, or may be a terminal device having a processor, a hard disk, a memory, and a system bus structure and capable of bearing services and guaranteeing service capabilities. The embodiments of the present application are not particularly limited.
Referring to fig. 1, fig. 1 is a schematic hardware structure of a server 100 according to an embodiment of the present invention, where the server 100 includes: a memory 101, a transceiver 102, and a processor 103 coupled with the memory 101 and the transceiver 102. The memory 101 is used for storing a computer program comprising program instructions, the processor 103 is used for executing the program instructions stored in the memory 101, and the transceiver 102 is used for communicating with other devices under the control of the processor 103. The processor 103, when executing the instructions, may perform the product recommendation method according to the program instructions.
The processor 103 may be a central processing unit (central processing unit, CPU), a general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application-specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA), or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various exemplary logic blocks, modules and circuits described in connection with the disclosure of embodiments of the invention. A processor may also be a combination that performs computing functions, e.g., including one or more microprocessors, a combination of a DSP and a microprocessor, and so forth. The transceiver 102 may be a communication interface, a transceiver circuit, etc., where the communication interface is generally referred to and may include one or more interfaces, such as an interface between a server and a terminal.
Optionally, the server 100 may also include a bus 104, through which the memory 101, the transceiver 102, and the processor 103 may be interconnected. The bus 104 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, among others. The bus 104 may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one thick line is shown in fig. 1, but this does not mean that there is only one bus or one type of bus.
In addition to the memory 101, the transceiver 102, the processor 103, and the bus 104 shown in fig. 1, the server 100 in the embodiment may further include other hardware according to the actual functions of the server, which will not be described herein.
In the above operating environment, the embodiment of the present invention provides a product recommendation method as shown in fig. 2. Referring to fig. 2, the product recommendation method includes:
s401, the server constructs a graph network according to the product information and the user information, wherein the graph network comprises N nodes, and N is a positive integer.
Optionally, the server constructs a graph network according to the product, the product feature, the user and the user feature, and the product node, the product feature node, the user node and the user feature node form N nodes of the graph network, where N is a positive integer.
For example, a graph network is constructed according to K users of a sales company, one or more user characteristics of each of the K users, L products, one or more product characteristics of each of the L products, and the relationships between the K users and the L products; each user, each user characteristic, each product, and each product characteristic is a node in the graph network. If the K users have M user characteristics in total and the L products have G product characteristics in total, the graph network includes N nodes in total, N = K + L + M + G. If the ith user among the K users uses the jth product among the L products, the user node of the ith user, the user feature nodes corresponding to the user characteristics of the ith user, the product node corresponding to the jth product, and the product feature nodes corresponding to the product characteristics of the jth product are connected in the graph network. For example, fig. 3 is a schematic structural diagram of a graph network provided by the present invention; as shown in the drawing, the user 1 node is connected to the product 1 node, and the user feature node of user 1 is also connected to the product feature node of product 1. It should be noted that fig. 3 is only a schematic diagram provided by the present invention; in practical cases, the values of K, L, M and G are all far greater than 1.
In the embodiment of the present invention, other forming manners of the graph network may also exist, which is not specifically limited in the present invention.
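The construction in S201 can be sketched as follows. This is a simplified illustration, assuming the graph links each user to its feature nodes, each product to its feature nodes, and each user to the products it used; all names (`user_feats`, `product_feats`, `usage`) are hypothetical, not identifiers from the patent.

```python
def build_graph(user_feats, product_feats, usage):
    """user_feats / product_feats map each user / product name to its
    feature names; usage is a list of (user, product) pairs.
    Returns the node list (K users + M user features + L products +
    G product features, so N = K + L + M + G) and the edge set."""
    users = sorted(user_feats)
    products = sorted(product_feats)
    u_feats = sorted({f for fs in user_feats.values() for f in fs})
    p_feats = sorted({f for fs in product_feats.values() for f in fs})
    nodes = users + u_feats + products + p_feats
    idx = {n: k for k, n in enumerate(nodes)}

    edges = set()
    for u in users:                       # user -- its feature nodes
        edges |= {(idx[u], idx[f]) for f in user_feats[u]}
    for p in products:                    # product -- its feature nodes
        edges |= {(idx[p], idx[f]) for f in product_feats[p]}
    for u, p in usage:                    # user -- product the user used
        edges.add((idx[u], idx[p]))
    return nodes, edges
```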
S202, the server acquires a first connection matrix of the graph network according to the connection relation of N nodes in the graph network.
In the context of graph theory, an object in a graph may be converted into a vector that captures the key information of that object on the graph. A method of converting an object (e.g., a node of a graph network) into a vector is also referred to as "representation learning".
Optionally, if the ith node and the jth node in the graph network are connected, the element in the ith row and jth column of the first connection matrix has the value 1; if the ith node and the jth node are not connected, that element has the value 0, where i and j are positive integers greater than or equal to 1 and less than or equal to N.
It should be noted that, the first connection matrix is used for representing global information of the graph network, and the ith row vector of the first connection matrix represents local information of the ith node of the graph network, that is, a connection relationship between the ith node in the graph network and other N-1 nodes in the graph network.
It can be understood that the graph network includes N nodes, and the first connection matrix is an N-order matrix.
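A minimal sketch of building the N-order first connection matrix from a handful of hypothetical connections (in practice N is far larger):

```python
import numpy as np

N = 4
edges = [(0, 1), (0, 2), (1, 3)]  # hypothetical undirected connections

# First connection matrix: element (i, j) is 1 iff nodes i and j are
# connected, 0 otherwise; the graph is undirected, so X is symmetric.
X = np.zeros((N, N), dtype=int)
for i, j in edges:
    X[i, j] = 1
    X[j, i] = 1

# Row i carries the local information of node i: its connection
# relationship with the other N-1 nodes.
row_0 = X[0]
```

The whole matrix is the global information of the graph; each row is one node's local view.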
S203, the server performs unsupervised training on the self-coding neural network by using the first connection matrix, and constructs a first loss function based on the minimization of the difference between the row vector of the first connection matrix and the reconstruction vector of the row vector of the first connection matrix.
Optionally, the self-coding neural network includes an input layer, K−1 hidden layers, and an output layer. The row vector x_i of the first connection matrix is the input variable. The output vector of the first hidden layer is y^(1)_i = w^(1)·x_i + b^(1); the first hidden layer comprises d_1 hidden units, y^(1)_i is a d_1-dimensional vector, w^(1) is the weight coefficient of the first-layer neural network, and b^(1) is the bias weight of the first-layer neural network. The output vector of the kth hidden layer is y^(k)_i = w^(k)·y^(k−1)_i + b^(k), where y^(k)_i is a d_k-dimensional vector, w^(k) is the weight coefficient of the kth-layer neural network, and b^(k) is the bias weight of the kth-layer neural network. The output vector of the output layer of the self-coding neural network is y^(K)_i = W·x_i + b, where y^(K)_i is a d_K-dimensional vector with d_K far less than N; y^(K)_i is the dense feature vector of the input vector x_i, W is the weight matrix of the self-coding neural network, and b is the bias vector of the self-coding neural network.
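The layer-by-layer mapping can be sketched as follows with toy dimensions. The layers here are affine maps, matching how they are written above; a practical autoencoder would normally insert a nonlinearity such as tanh between layers:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                    # input dimension (one row of the connection matrix)
dims = [N, 4, 2]         # d1 = 4 hidden units, dK = 2, with dK far less than N

# One weight coefficient w(k) and one bias weight b(k) per layer.
Ws = [rng.normal(scale=0.1, size=(dims[k + 1], dims[k]))
      for k in range(len(dims) - 1)]
bs = [np.zeros(dims[k + 1]) for k in range(len(dims) - 1)]

def encode(x):
    """Apply y(k) = w(k) @ y(k-1) + b(k) layer by layer; return y(K)."""
    y = x
    for w, b in zip(Ws, bs):
        y = w @ y + b
    return y

x_i = np.array([0., 1., 0., 0., 1., 0., 0., 1.])  # a sparse 0/1 row vector
y_K = encode(x_i)  # the dense feature vector of x_i
```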
Optionally, the input variables of the unsupervised training are the row vectors of the first connection matrix, and the output vectors of the unsupervised training are the reconstruction vectors of those row vectors. The server performs unsupervised training on the self-coding neural network using the first connection matrix and constructs the first loss function based on minimization of the difference between each row vector of the first connection matrix and its reconstruction vector, which may specifically include the following steps:
step one: the server uses the row vector X of the first connection matrix X i Inputting the self-coding neural network, extracting row vector x through the self-coding neural network i Is a dense feature vector y of (1) (K) i Dense feature vector y (K) i Represented as follows, y (K) i =W*x i And +b, W is a weight matrix of the self-coding neural network, and b is a bias vector of the self-coding neural network.
Step two: the server takes the dense feature vector as the reverse input variable of the self-coding neural network; the reverse output vector of the self-coding neural network is the reconstruction vector x̂_i of the row vector, expressed as x̂_i = Ŵ·y^(K)_i + b̂, where Ŵ and b̂ are the weight matrix and bias vector of the reverse pass.
step three: the server is based on row vector x i Reconstructing vectorsIs to construct a first loss function ψ 1 First loss function ψ 1 Expressed as follows>Wherein the symbol->Representing the two norms of the vector.
Optionally, the first connection matrix is a very sparse matrix. To reduce the impact of the sparsity of the first connection matrix, the server determines a second connection matrix S from the first connection matrix X, constructed as follows: if the element x_{i,j} in the ith row and jth column of the first connection matrix has the value 0, then the element s_{i,j} in the ith row and jth column of the second connection matrix has the value 1; if x_{i,j} has the value 1, then s_{i,j} has the value a, where a is a hyperparameter greater than 1.
For example, the parameter a takes a value of 10.
It will be appreciated that, since the connection matrix is very sparse, the number of zero values far exceeds the number of non-zero values, which drives the elements of the reconstruction matrix toward zero; weighting the reconstruction error with the second connection matrix constructed above counteracts this effect of sparsity.
Optionally, the server constructs the first loss function ψ1 based on minimization of the difference between the row vector x_i and the reconstruction vector x̂_i, expressed as ψ1 = Σ_{i=1}^{N} ‖(x̂_i − x_i) ⊙ s_i‖₂² = ‖(X̂ − X) ⊙ S‖_F², where s_i denotes the ith row vector of the second connection matrix, ⊙ denotes the element-wise product, and ‖·‖_F denotes the F norm.
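A sketch of the second connection matrix and the weighted reconstruction loss, using the example value a = 10; the reconstruction matrix here is a constant stand-in so the effect of the weighting is easy to read off:

```python
import numpy as np

N = 5
X = np.zeros((N, N))
X[0, 1] = X[1, 0] = X[2, 3] = X[3, 2] = 1.0  # a very sparse graph
a = 10.0                                     # hyperparameter a > 1

# s_ij = 1 where x_ij = 0 and s_ij = a where x_ij = 1, so the scarce
# non-zero entries weigh a times more in the reconstruction error.
S = np.where(X == 1.0, a, 1.0)

X_hat = np.full((N, N), 0.1)                 # stand-in reconstruction matrix
# psi1 = || (X_hat - X) * S ||_F^2  (element-wise product, F norm)
psi1 = float(np.sum(((X_hat - X) * S) ** 2))
```

Here the 4 non-zero entries contribute (−0.9·10)² = 81 each, versus 0.01 for each of the 21 zero entries, so the loss is dominated by the actual connections rather than the zeros.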
S204, the server performs supervised training on the self-coding neural network using the first connection matrix; the input variables of the supervised training are the ith row vector and the jth row vector of the first connection matrix, the output vectors of the supervised training are the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, and a second loss function is constructed according to minimization of the difference between the dense feature vector of the ith row vector and the dense feature vector of the jth row vector.
Optionally, the server performs supervised training on the self-coding neural network using the first connection matrix and constructs the second loss function according to minimization of the difference between the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, which specifically includes the following steps:
Step one: the server inputs the row vector x_i of the first connection matrix into the self-coding neural network and extracts, through the self-coding neural network, the dense feature vector y^(K)_i of the row vector x_i.
Step two: the server inputs the row vector x_j of the first connection matrix into the self-coding neural network and extracts, through the self-coding neural network, the dense feature vector y^(K)_j of the row vector x_j.
Step three: the server constructs the second loss function ψ2 based on minimization of the difference between the dense feature vector y^(K)_i and the dense feature vector y^(K)_j, expressed as ψ2 = Σ_{(i,j): x_{i,j}=1} ‖y^(K)_i − y^(K)_j‖₂².
It will be appreciated that, from the implicit-characteristic point of view, the similarity of two nodes connected in the graph network should be as large as possible, i.e., their degree of difference should be as small as possible.
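The supervised loss over connected node pairs can be sketched as follows (hypothetical, hand-picked embeddings):

```python
import numpy as np

N = 4
X = np.zeros((N, N))
X[0, 1] = X[1, 0] = 1.0          # nodes 0 and 1 are connected
Y = np.array([[0.0, 0.0],        # dense feature vector of node 0
              [1.0, 1.0],        # node 1 (connected to node 0)
              [5.0, 5.0],        # node 2
              [9.0, 0.0]])       # node 3

# psi2 sums || y_i - y_j ||_2^2 over connected pairs (i, j), pulling
# the embeddings of linked nodes together during training.
psi2 = sum(
    float(np.sum((Y[i] - Y[j]) ** 2))
    for i in range(N) for j in range(N) if X[i, j] == 1.0
)
```

Only the connected pair (0, 1) contributes, counted once in each direction: ψ2 = 2 · (1² + 1²) = 4.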
S205, the server combines the first loss function and the second loss function to construct an objective function, and optimizes the objective function.
In the embodiment of the invention, the server combines the first loss function and the second loss function to construct the objective function of the product recommendation model, and optimizes the objective function, thereby realizing the optimization of the product recommendation model.
Optionally, the objective function ψ is expressed as ψ = α·ψ1 + (1 − α)·ψ2, and the parameters to be optimized of the objective function are W and b. Here α is a weight factor with a value range of 0 to 1, used to adjust the proportion between the unsupervised loss function and the supervised loss function.
Optionally, the objective function is iteratively optimized by using a gradient descent algorithm, so as to obtain an optimal solution of the parameter to be optimized.
Optionally, performing iterative optimization on the objective function by using a gradient descent algorithm specifically includes the following steps:
Step one: denote (W, b) by R and initialize the parameter to be optimized as R^(0).
Step two: calculating an objective function ψ about a parameter R to be optimized (i) Is a deviator of (a)Deducing->The expression is as follows:
wherein,,
wherein Y is a matrix composed of dense eigenvectors corresponding to N row vectors in the first connection matrix.
Step three: the parameters R to be optimized are iteratively updated by,wherein beta is a gain factor and satisfies 0.ltoreq.beta.ltoreq.1.
Steps two and three are repeated until the objective function converges.
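The iteration has the generic shape below. A toy quadratic objective with a closed-form gradient stands in for ψ; in the method above, R packs (W, b) and the gradient comes from back-propagation:

```python
import numpy as np

def gradient_descent(grad, R0, beta=0.1, tol=1e-8, max_iter=10000):
    """Repeat R <- R - beta * grad(R) until the update is negligible."""
    R = np.asarray(R0, dtype=float)
    for _ in range(max_iter):
        step = beta * grad(R)
        R = R - step
        if np.linalg.norm(step) < tol:   # objective has converged
            break
    return R

# Toy objective psi(R) = ||R - c||^2, gradient 2(R - c), minimiser R = c.
c = np.array([1.0, -2.0])
R_opt = gradient_descent(lambda R: 2.0 * (R - c), np.zeros(2))
```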
It should be noted that, in the embodiment of the present invention, other algorithms may be used to optimize the objective function, which is not limited in particular by the present invention.
S206, recommending products to the user by the server based on the trained self-coding neural network.
Optionally, the server recommends a product to the user using the trained product recommendation model based on the self-coding neural network, which specifically includes the following steps: the server inputs the row vector of the first user in the first connection matrix into the self-coding neural network; the server extracts the dense feature vector of the row vector of the first user using the self-coding neural network; the server extracts the dense feature vectors of M target products in the first connection matrix using the self-coding neural network; the server calculates the degree of difference between the dense feature vectors of the M target products and the dense feature vector of the row vector of the first user, and the product among the M target products whose dense feature vector has the smallest degree of difference from that of the first user is the product recommended to the first user.
It can be appreciated that, from the implicit characteristic point of view, a product with the smallest degree of difference from the dense characteristic vector of the row vector of the first user among the M target products is most suitable for the first user.
Optionally, calculating the degree of difference between the dense feature vectors of the M products and the dense feature vector of the row vector of the first user may specifically include: calculating the degree of difference C_th between the dense feature vector of the first product among the M target products and the dense feature vector of the row vector of the first user, expressed as C_th = ‖y^(K)_t − y^(K)_h‖₂², where the row vector of the first product is the tth row vector of the first connection matrix and the row vector of the first user is the hth row vector of the first connection matrix.
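Selecting the product with the smallest difference degree can be sketched as follows (hypothetical dense feature vectors, picked so one product is clearly nearest):

```python
import numpy as np

y_user = np.array([1.0, 0.0, 0.0])     # dense feature vector of the first user
y_products = np.array([                # dense feature vectors of M=4 products
    [0.0, 1.0, 0.0],
    [2.0, 2.0, 2.0],
    [1.0, 0.1, 0.0],                   # closest to the user's vector
    [-1.0, 0.0, 0.0],
])

# C = || y_t - y_h ||_2^2 for each candidate product t.
diffs = np.sum((y_products - y_user) ** 2, axis=1)
recommended = int(np.argmin(diffs))    # index with the smallest difference
```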
It will be appreciated that obtaining the deep features of users and products in a high-dimensional space through deep learning makes it possible to select the most suitable product for a user. The deep layers of the self-coding neural network remove noise from the features, precisely quantize the features of each dimension, and retain the key information.
In the embodiment of the invention, a server constructs a graph network according to product information and user information, and acquires a connection matrix of the graph network by utilizing the connection relation of N nodes in the graph network; the server performs unsupervised training on the self-coding neural network by using a connection matrix and builds a first loss function; the server performs supervised training on the self-coding neural network by using the connection matrix, and constructs a second loss function according to the minimization of the difference of dense feature vectors of different row vectors in the connection matrix; the server builds and optimizes an objective function by combining the first loss function and the second loss function, so that optimization of a product recommendation model based on the self-coding neural network is realized; and the server can recommend the product to the user by using the optimized product recommendation model. According to the embodiment of the invention, the product recommendation model based on the self-coding neural network is trained by utilizing the connection relation between the user and the product, deep features of the user and the product can be extracted through the trained product recommendation model, the most suitable product can be accurately recommended to the user, and the user experience and recommendation efficiency are improved.
The embodiment of the invention also provides a product recommendation device that can realize the beneficial effects of the product recommendation method shown in fig. 2. The functions of the device may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes at least one module corresponding to the above functions.
Referring to fig. 4, fig. 4 is a block diagram illustrating a product recommendation apparatus 400 according to an embodiment of the present invention, where the apparatus includes: a construction unit 401, an acquisition unit 402, an unsupervised training unit 403, a supervised training unit 404, an optimization unit 405, and a recommendation unit 406.
Construction unit 401: the method is used for constructing a graph network according to the product information and the user information, wherein the graph network comprises N nodes, and N is a positive integer.
An obtaining unit 402, configured to obtain a first connection matrix of the graph network according to connection relationships of N nodes in the graph network.
An unsupervised training unit 403, configured to perform unsupervised training on the self-encoding neural network by using the first connection matrix, and construct a first loss function based on minimization of a difference between a row vector of the first connection matrix and a reconstructed vector of the row vector of the first connection matrix.
The supervised training unit 404 is configured to perform supervised training on the self-encoding neural network using the first connection matrix; the input variables of the supervised training are the ith row vector and the jth row vector of the first connection matrix, the output vectors of the supervised training are the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, and a second loss function is constructed according to minimization of the difference between the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, where i and j are positive integers greater than or equal to 1 and less than or equal to N.
An optimizing unit 405, configured to construct an objective function by combining the first loss function and the second loss function, and optimize the objective function.
And a recommending unit 406, configured to recommend a product to the user based on the trained self-coding neural network.
Optionally, if the ith node and the jth node in the graph network are connected, the value of the element of the ith row and the jth column of the first connection matrix is 1, and if the ith node and the jth node in the graph network are not connected, the value of the element of the ith row and the jth column of the first connection matrix is 0, and the self-coding neural network comprises an input layer, K-1 hidden layers and an output layer; the input variable of the unsupervised training is a row vector of the first connection matrix, the output vector of the unsupervised training is a reconstructed vector of the row vector of the first connection matrix, and the unsupervised training unit 403 includes:
An unsupervised forward training unit, configured to input the row vector x_i of the first connection matrix X into the self-encoding neural network and extract, through the self-encoding neural network, the dense feature vector y^(K)_i of the row vector x_i, expressed as y^(K)_i = W·x_i + b, where y^(K)_i is the output vector of the output layer of the self-encoding neural network, W is the weight matrix of the self-encoding neural network, and b is the bias vector of the self-encoding neural network.
An unsupervised reverse training unit, configured to take the dense feature vector as the reverse input variable of the self-encoding neural network, the reverse output vector of the self-encoding neural network being the reconstruction vector x̂_i of the row vector, expressed as x̂_i = Ŵ·y^(K)_i + b̂.
A first construction unit, configured to construct the first loss function ψ1 based on minimization of the difference between the row vector x_i and the reconstruction vector x̂_i, expressed as ψ1 = Σ_{i=1}^{N} ‖x̂_i − x_i‖₂², where ‖·‖₂ denotes the two-norm of a vector.
Optionally, the first building unit includes:
A determining unit, configured to determine the second connection matrix S from the first connection matrix X: if the element x_{i,j} in the ith row and jth column of the first connection matrix has the value 0, then the element s_{i,j} in the ith row and jth column of the second connection matrix has the value 1; if x_{i,j} has the value 1, then s_{i,j} has the value a, where a is a hyperparameter greater than 1.
A second construction unit, configured to construct the first loss function ψ1 based on minimization of the difference between the row vector x_i and the reconstruction vector x̂_i, expressed as ψ1 = Σ_{i=1}^{N} ‖(x̂_i − x_i) ⊙ s_i‖₂².
Optionally, the supervised training unit 404 includes:
A first supervised training unit, configured to input the row vector x_i of the first connection matrix into the self-encoding neural network and extract, through the self-encoding neural network, the dense feature vector y^(K)_i of the row vector x_i.
A second supervised training unit, configured to input the row vector x_j of the first connection matrix into the self-encoding neural network and extract, through the self-encoding neural network, the dense feature vector y^(K)_j of the row vector x_j.
A third construction unit, configured to construct the second loss function ψ2 based on minimization of the difference between the dense feature vector y^(K)_i and the dense feature vector y^(K)_j, expressed as ψ2 = Σ_{(i,j): x_{i,j}=1} ‖y^(K)_i − y^(K)_j‖₂².
Optionally, the objective function ψ is expressed as ψ = α·ψ1 + (1 − α)·ψ2, where α is greater than 0 and less than 1.
Optionally, the recommendation unit 406 includes:
and the input unit is used for inputting the row vector of the first user in the first connection matrix into the self-coding neural network.
A first extraction unit for extracting dense feature vectors of row vectors of the first user using the self-encoding neural network.
And the second extraction unit is used for extracting dense feature vectors of M target products in the first connection matrix by using the self-coding neural network.
The computing unit is used for computing the difference degree of the dense feature vectors of the M target products and the dense feature vectors of the row vectors of the first user, and the product with the smallest difference degree with the dense feature vectors of the row vectors of the first user in the M target products is the product recommended to the first user.
Optionally, the calculating unit is specifically configured to calculate the degree of difference C_th between the dense feature vector of the first product among the M target products and the dense feature vector of the row vector of the first user, expressed as C_th = ‖y^(K)_t − y^(K)_h‖₂², where the row vector of the first product is the tth row vector of the first connection matrix and the row vector of the first user is the hth row vector of the first connection matrix.
The steps of a method or algorithm described in connection with the present disclosure may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a network device. The processor and the storage medium may also reside as discrete components in a network device.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing detailed description of the embodiments of the present invention further illustrates the purposes, technical solutions and advantageous effects of the embodiments of the present invention, and it should be understood that the foregoing description is only a specific implementation of the embodiments of the present invention, and is not intended to limit the scope of the embodiments of the present invention, and any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the embodiments of the present invention should be included in the scope of the embodiments of the present invention.
Claims (6)
1. A method of product recommendation, the method comprising:
the method comprises the steps that a server constructs a graph network according to product information and user information, wherein the graph network comprises N nodes, and N is a positive integer;
the server acquires a first connection matrix of the graph network according to the connection relation of N nodes in the graph network;
the server performs unsupervised training on the self-coding neural network by using the first connection matrix, and constructs a first loss function based on minimization of difference between a row vector of the first connection matrix and a reconstruction vector of the row vector;
the server performs supervised training on the self-coding neural network by using the first connection matrix, wherein input variables of the supervised training are an ith row vector and a jth row vector of the first connection matrix, output vectors of the supervised training are a dense feature vector of the ith row vector and a dense feature vector of the jth row vector, and a second loss function is constructed according to the minimization of the difference between the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, and i and j are positive integers which are larger than or equal to 1 and smaller than or equal to N;
The server combines the first loss function and the second loss function to construct an objective function, and optimizes the objective function;
the server recommends products to the user based on the trained self-coding neural network;
if the ith node and the jth node in the graph network are connected, the value of the element in the ith row and jth column of the first connection matrix is 1, and if the ith node and the jth node in the graph network are not connected, the value of the element in the ith row and jth column of the first connection matrix is 0; the self-encoding neural network comprises an input layer, K−1 hidden layers, and an output layer; the input variables of the unsupervised training are the row vectors of the first connection matrix, and the output vectors of the unsupervised training are the reconstruction vectors of the row vectors of the first connection matrix; the server performing unsupervised training on the self-encoding neural network by using the first connection matrix, and constructing a first loss function based on minimization of the difference between the row vector of the first connection matrix and the reconstruction vector of the row vector of the first connection matrix, comprises the following steps:
the server inputs the row vector x_i of the first connection matrix into the self-encoding neural network and extracts, through the self-encoding neural network, the dense feature vector y^(K)_i of the row vector, expressed as y^(K)_i = W·x_i + b, where y^(K)_i is the output vector of the output layer of the self-encoding neural network, W is the weight matrix of the self-encoding neural network, and b is the bias vector of the self-encoding neural network;
the server takes the dense feature vector as the reverse input variable of the self-encoding neural network, the reverse output vector of the self-encoding neural network being the reconstruction vector x̂_i of the row vector, expressed as x̂_i = Ŵ·y^(K)_i + b̂;
the server constructs the first loss function ψ1 based on minimization of the difference between the row vector x_i and the reconstruction vector x̂_i, expressed as ψ1 = Σ_{i=1}^{N} ‖x̂_i − x_i‖₂², where ‖·‖₂ denotes the two-norm of a vector;
the server performs supervised training on the self-encoding neural network by using the first connection matrix, wherein input variables of the supervised training are an ith row vector and a jth row vector of the first connection matrix, output vectors of the supervised training are a dense feature vector of the ith row vector and a dense feature vector of the jth row vector, and a second loss function is constructed according to the minimization of the difference between the dense feature vector of the ith row vector and the dense feature vector of the jth row vector, and the method comprises the following steps:
the server inputs the row vector x_i of the first connection matrix into the self-encoding neural network and extracts, through the self-encoding neural network, the dense feature vector y^(K)_i of the row vector x_i;
the server inputs the row vector x_j of the first connection matrix into the self-encoding neural network and extracts, through the self-encoding neural network, the dense feature vector y^(K)_j of the row vector x_j;
the server constructs the second loss function ψ2 based on minimization of the difference between the dense feature vector y^(K)_i and the dense feature vector y^(K)_j, expressed as ψ2 = Σ_{(i,j): x_{i,j}=1} ‖y^(K)_i − y^(K)_j‖₂².
2. The method of claim 1, wherein the server recommends products to a user based on the trained self-encoding neural network, comprising:
the server inputs the row vector of the first user in the first connection matrix into the self-coding neural network;
the server extracting dense feature vectors of row vectors of the first user using the self-encoding neural network;
the server extracts dense feature vectors of M target products in the first connection matrix by using the self-coding neural network;
The server calculates the difference degree of the dense feature vectors of the M target products and the dense feature vectors of the row vectors of the first user, and the product with the smallest difference degree with the dense feature vectors of the row vectors of the first user in the M target products is the product recommended to the first user.
3. The method of claim 2, wherein the calculating a degree of difference of the dense feature vectors of the M products and the dense feature vectors of the row vectors of the first user comprises:
calculating the degree of difference C_th between the dense feature vector of the first product among the M target products and the dense feature vector of the row vector of the first user, expressed as C_th = ‖y^(K)_t − y^(K)_h‖₂², wherein the row vector of the first product is the tth row vector of the first connection matrix, and the row vector of the first user is the hth row vector of the first connection matrix.
4. A product recommendation device, characterized in that it is adapted to perform the method according to any one of claims 1 to 3.
5. A server comprising a processor, a communication device and a memory, the processor, the communication device and the memory being interconnected, wherein the memory is for storing application code, the processor being configured to invoke the application code to perform the method of any of claims 1 to 3.
6. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is executed by a processor to implement the method of any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910530886.3A CN110321484B (en) | 2019-06-18 | 2019-06-18 | Product recommendation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110321484A CN110321484A (en) | 2019-10-11 |
CN110321484B true CN110321484B (en) | 2023-06-02 |
Family
ID=68121120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910530886.3A Active CN110321484B (en) | 2019-06-18 | 2019-06-18 | Product recommendation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110321484B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111144976B (en) * | 2019-12-10 | 2022-08-09 | 支付宝(杭州)信息技术有限公司 | Training method and device for recommendation model |
CN113283921A (en) * | 2020-02-19 | 2021-08-20 | 华为技术有限公司 | Business data processing method and device and cloud server |
CN111368205B (en) * | 2020-03-09 | 2021-04-06 | 腾讯科技(深圳)有限公司 | Data recommendation method and device, computer equipment and storage medium |
CN113393281A (en) * | 2020-03-11 | 2021-09-14 | 北京沃东天骏信息技术有限公司 | Method and device for processing request |
CN111400594B (en) * | 2020-03-13 | 2023-05-09 | 喜丈(上海)网络科技有限公司 | Information vector determining method, device, equipment and storage medium |
CN111459781B (en) * | 2020-04-01 | 2022-08-26 | 厦门美图之家科技有限公司 | Behavior relation determination method and device, computer equipment and readable storage medium |
CN111931076B (en) * | 2020-09-22 | 2021-02-09 | 平安国际智慧城市科技股份有限公司 | Method and device for carrying out relationship recommendation based on authorized directed graph and computer equipment |
CN114864108B (en) * | 2022-07-05 | 2022-09-09 | 深圳市圆道妙医科技有限公司 | Processing method and processing system for syndrome and prescription matching data |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015180397A1 (en) * | 2014-05-31 | 2015-12-03 | 华为技术有限公司 | Method and device for recognizing data category based on deep neural network |
WO2018098598A1 (en) * | 2016-12-02 | 2018-06-07 | Stack Fintech Inc. | Digital banking platform and architecture |
CN108694232A (en) * | 2018-04-26 | 2018-10-23 | 武汉大学 | A kind of socialization recommendation method based on trusting relationship feature learning |
Also Published As
Publication number | Publication date |
---|---|
CN110321484A (en) | 2019-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110321484B (en) | Product recommendation method and device | |
CN107301225B (en) | Short text classification method and device | |
JP6381768B1 (en) | Learning device, learning method, learning program and operation program | |
Zhao et al. | Learning hierarchical features from generative models | |
US11574239B2 (en) | Outlier quantization for training and inference | |
US11790234B2 (en) | Resource-aware training for neural networks | |
CN104809139B (en) | Code file querying method and device | |
CN112364242B (en) | Graph convolution recommendation system for context awareness | |
CN113632106A (en) | Hybrid precision training of artificial neural networks | |
CN112347246B (en) | Self-adaptive document clustering method and system based on spectrum decomposition | |
CN111178039A (en) | Model training method and device, and method and device for realizing text processing | |
CN110502701B (en) | Friend recommendation method, system and storage medium introducing attention mechanism | |
Barboni et al. | On global convergence of ResNets: From finite to infinite width using linear parameterization | |
JP7056345B2 (en) | Data analysis systems, methods, and programs | |
CN116932686A (en) | Theme mining method and device, electronic equipment and storage medium | |
CN116521899A (en) | Improved graph neural network-based document-level relation extraction algorithm and system | |
CN112734519B (en) | Commodity recommendation method based on convolution self-encoder network | |
CN117859139A (en) | Multi-graph convolution collaborative filtering | |
CN111222722A (en) | Method, neural network model and device for business prediction for business object | |
CN117252665B (en) | Service recommendation method and device, electronic equipment and storage medium | |
Müller et al. | Estimating functionals of the error distribution in parametric and nonparametric regression | |
Wang et al. | Improvement of the kernel minimum squared error model for fast feature extraction | |
CN113486167B (en) | Text completion method, apparatus, computer device and storage medium | |
CN108230413B (en) | Image description method and device, electronic equipment and computer storage medium | |
CN117436443B (en) | Model construction method, text generation method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |