CN117520657A - In-memory search implementation method and device based on deep hash algorithm and electronic equipment - Google Patents

In-memory search implementation method and device based on deep hash algorithm and electronic equipment

Info

Publication number
CN117520657A
CN117520657A CN202311617396.XA
Authority
CN
China
Prior art keywords
recommended
user
node
nodes
hash
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311617396.XA
Other languages
Chinese (zh)
Inventor
王菲
尚大山
张握瑜
李志�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS filed Critical Institute of Microelectronics of CAS
Priority to CN202311617396.XA
Publication of CN117520657A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an in-memory search implementation method and device based on a deep hash algorithm, and an electronic device, and relates to the fields of machine learning and artificial intelligence. The deep hash algorithm comprises a feature extraction layer and a hash layer, and the method comprises the following steps: extracting, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended; converting, by the hash layer, all the user node features and item node features output by the feature extraction layer into corresponding binary hash codes; when searching, determining, via the Hamming distance, the similarity between the binary hash code corresponding to a user node and the binary hash codes of all the item nodes to be recommended; and determining a target item to be recommended based on the similarities. Good search accuracy can thereby be achieved, and the method can be directly applied to a recommendation model and directly deployed on an in-memory computing chip.

Description

In-memory search implementation method and device based on deep hash algorithm and electronic equipment
Technical Field
The application relates to the field of machine learning and artificial intelligence, in particular to an in-memory search implementation method and device based on a deep hash algorithm and electronic equipment.
Background
Recommendation systems have become a fundamental tool in daily life, supporting various online services such as e-commerce, social media, and online video and music platforms. Their core function is to push the content or products that are most relevant and most likely to interest a user by understanding the user's preferences, behaviors, and needs. Given a user-related query, the recommendation engine searches the database for the small fraction of items that the user may like. This operation requires computing the similarity between a high-dimensional real-valued user vector and every item vector in the database one by one, which incurs an expensive computational cost due to complex similarity metrics and inefficient exhaustive search strategies.
In addition, deploying a recommendation system on a resource-constrained edge device is challenging, because loading data from memory to the processing units across the memory hierarchy leads to high latency and high power consumption. These problems can be effectively alleviated by in-memory search, i.e., performing parallel search operations inside a content addressable memory (CAM) built from nonvolatile memory such as RRAM.
However, this approach does not support the retrieval of real-valued vectors. Recently, researchers have proposed mapping real-valued vectors into binary vectors using random mapping methods such as random downsampling and locality-sensitive hashing, accelerating the search operation with in-memory computing, and have successfully applied this to tasks such as image retrieval and classification.
However, these methods have limited performance on recommendation tasks and must use long hash codes to maintain search accuracy. Long codes are unfriendly to emerging nonvolatile devices, which suffer from non-ideal characteristics, making such methods difficult to deploy directly on an in-memory computing chip.
Disclosure of Invention
The invention aims to provide an in-memory search implementation method and device based on a deep hash algorithm, and an electronic device, so as to solve the problems that existing methods have limited performance on recommendation tasks, must use long hash codes to maintain search accuracy, are unfriendly to emerging nonvolatile devices, are affected by the non-ideal characteristics of those devices, and are difficult to deploy directly on an in-memory computing chip.
In a first aspect, the present application provides a method for implementing in-memory search based on a deep hash algorithm, where the deep hash algorithm includes a feature extraction layer and a hash layer, and the method includes:
extracting, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended;
converting, by the hash layer, all the user node features and item node features output by the feature extraction layer into corresponding binary hash codes;
when searching, determining, via the Hamming distance, the similarity between the binary hash code corresponding to the user node and the binary hash codes of all the item nodes to be recommended;
and determining a target item to be recommended based on the plurality of similarities.
With the above technical solution, the in-memory search implementation method based on a deep hash algorithm provided by the embodiments of the application, in which the deep hash algorithm comprises a feature extraction layer and a hash layer, includes: extracting, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended; converting, by the hash layer, all the user node features and item node features output by the feature extraction layer into corresponding binary hash codes; when searching, determining, via the Hamming distance, the similarity between the binary hash code corresponding to the user node and the binary hash codes of all the item nodes to be recommended; and determining the target item to be recommended based on the similarities. Good search accuracy can thereby be achieved, and the method can be directly applied to a recommendation model and directly deployed on an in-memory computing chip with high accuracy.
In one possible implementation manner, before extracting, by the feature extraction layer, the user node features corresponding to the user nodes and the item node features corresponding to the item nodes to be recommended, the method further includes:
training and verifying the deep hash algorithm through the recommended data set.
In a possible implementation manner, the extracting, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended includes:
aggregating the previous-layer features of all the item nodes to be recommended onto the corresponding previous-layer user node features, and iterating to obtain the user node features of the current layer;
and aggregating all the previous-layer user node features onto the corresponding item node features to be recommended, and iterating to obtain the item node features to be recommended of the current layer.
In a possible implementation manner, the converting, by the hash layer, all the user node features and item node features to be recommended output by the feature extraction layer into corresponding binary hash codes includes:
binarizing the user node features and the item node features to be recommended through a sign function, respectively;
and converting the binarized user node features and item node features to be recommended into compact binary hash codes through a binary fully-connected network, respectively.
In a possible implementation manner, the training and verifying the deep hash algorithm through the recommended data set includes:
converting the recommendation data set into a user-work data set with connection relations between nodes;
dividing the user-work data set into a training set and a testing set;
dividing the training set into a positive sample pair and a negative sample pair;
randomly selecting part of training data from the positive sample pair and the negative sample pair, and determining approximate hash codes of all nodes in the training data through a deep hash algorithm;
determining a positive sample pair prediction result based on the positive sample pair and the approximate hash code;
determining the distance between the user node hash codes corresponding to the negative sample pairs and all the work node hash codes;
determining a prediction error value through a cross entropy method according to the real connection relation determined by the training set;
and updating parameters in the feature extraction layer and the hash layer based on the prediction error value.
In a possible implementation manner, after the updating of the parameters in the feature extraction layer and the hash layer according to the prediction error value, the method further includes:
passing a plurality of user nodes randomly selected from the test set through the feature extraction layer and the hash layer to obtain the corresponding binary hash codes;
approximating the activation function in the hash layer with a sign function, and determining, for each selected user node, the Hamming distance from the user node to each work;
and determining the work corresponding to the minimum Hamming distance among the Hamming distances as the target item to be recommended.
In one possible implementation, the dividing the training set into positive and negative sample pairs includes:
dividing the user-work pairs in the training set that have a connection (edge) relationship into positive sample pairs;
and dividing the user-work pairs in the training set that have no connection relationship into negative sample pairs.
In one possible implementation, the determining a positive sample pair prediction result based on the positive sample pair and the approximate hash code includes:
determining the distance between the user node hash codes and all the work node hash codes in the approximate hash codes corresponding to the positive sample pairs;
and determining a positive sample pair prediction result based on the distances between the user node hash codes and all the work node hash codes in the approximate hash codes.
In a second aspect, the present application further provides an in-memory search implementation apparatus based on a deep hash algorithm, where the apparatus includes:
the feature extraction module is used for extracting, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended;
the binary hash code conversion module is used for converting, by the hash layer, all the user node features and item node features output by the feature extraction layer into corresponding binary hash codes;
the similarity determining module is used for determining, via the Hamming distance when searching, the similarity between the binary hash code corresponding to the user node and the binary hash codes of all the item nodes to be recommended;
and the target item determining module is used for determining the target item to be recommended based on the plurality of similarities.
In one possible implementation, the apparatus further includes:
and the training verification module is used for training and verifying the deep hash algorithm through the recommended data set.
In one possible implementation manner, the feature extraction module includes:
the first feature extraction submodule is used for aggregating the previous-layer features of all the item nodes to be recommended onto the corresponding previous-layer user node features, and iterating to obtain the user node features of the current layer;
and the second feature extraction submodule is used for aggregating all the previous-layer user node features onto the corresponding item node features to be recommended, and iterating to obtain the item node features to be recommended of the current layer.
In one possible implementation manner, the binary hash code conversion module includes:
the binarization submodule is used for binarizing the user node features and the item node features to be recommended through a sign function, respectively;
and the first conversion sub-module is used for converting the binarized user node features and item node features to be recommended into compact binary hash codes through a binary fully-connected network, respectively.
In one possible implementation, the training verification module includes:
the second conversion sub-module is used for converting the recommended data set into a user-work data set with a connection relation between nodes;
a first dividing sub-module for dividing the user-work data set into a training set and a testing set;
a second dividing sub-module for dividing the training set into positive and negative sample pairs;
a first determining submodule, configured to randomly select a part of training data from the positive sample pair and the negative sample pair, and determine approximate hash codes of all nodes in the training data through a deep hash algorithm;
a second determination sub-module for determining a positive sample pair prediction result based on the positive sample pair and the approximate hash code;
a third determining submodule, configured to determine distances between the user node hash codes corresponding to the negative sample pairs and all the work node hash codes;
a fourth determining sub-module, configured to determine a prediction error value according to the real connection relationship determined by the training set by using a cross entropy method;
an updating sub-module for updating parameters in the feature extraction layer and the hash layer according to the prediction error value.
In one possible implementation, the training verification module further includes:
a third feature extraction sub-module, configured to pass a plurality of user nodes randomly selected from the test set through the feature extraction layer and the hash layer to obtain the corresponding binary hash codes;
a fifth determining submodule, configured to approximate the activation function in the hash layer with a sign function, and determine, for each selected user node, the Hamming distance from the user node to each work;
and a sixth determining submodule, configured to determine the work corresponding to the minimum Hamming distance among the Hamming distances as the target item to be recommended.
In one possible implementation, the second dividing submodule includes:
a first dividing unit for dividing the user-work pairs in the training set that have a connection (edge) relationship into positive sample pairs;
and a second dividing unit for dividing the user-work pairs in the training set that have no connection relationship into negative sample pairs.
In one possible implementation, the second determining submodule includes:
a first determining unit, configured to determine distances between user node hash codes and all work node hash codes in the approximate hash codes corresponding to the positive sample pair;
and the second determining unit is used for determining a positive sample pair prediction result based on the distances between the user node hash codes and all the work node hash codes in the approximate hash codes.
The in-memory search implementation device based on the deep hash algorithm provided in the second aspect has the same advantages as the in-memory search implementation method based on the deep hash algorithm described in the first aspect or any possible implementation manner of the first aspect, which are not repeated here.
In a third aspect, the present application further provides an electronic device, including: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause performance of the in-memory search implementation method based on a deep hash algorithm described by any of the possible implementations of the first aspect.
The beneficial effects of the electronic device provided in the third aspect are the same as the beneficial effects of the in-memory search implementation method based on the deep hash algorithm described in the first aspect or any possible implementation manner of the first aspect, and are not described herein.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 shows a flow diagram of an in-memory search implementation method based on a deep hash algorithm according to an embodiment of the present application;
fig. 2 is a schematic flow chart of another in-memory search implementation method based on a deep hash algorithm according to an embodiment of the present application;
fig. 3 shows a hardware schematic diagram of an in-memory search implementation based on a deep hash algorithm according to an embodiment of the present application;
fig. 4 is a schematic diagram of a test structure for testing a deep hash algorithm of a recommendation system on a data set according to an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an in-memory search implementation device based on a deep hash algorithm according to an embodiment of the present application;
fig. 6 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
In order to clearly describe the technical solutions of the embodiments of the present application, the words "first", "second", etc. are used in the embodiments of the present application to distinguish between identical or similar items having substantially the same function and effect. For example, the first threshold and the second threshold merely distinguish different thresholds, without implying an order. It will be appreciated by those skilled in the art that the words "first", "second", and the like do not limit the quantity or order of execution, and do not imply that the objects so described are necessarily different.
In this application, the terms "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, a and b, a and c, b and c, or a, b and c, wherein a, b, c can be single or multiple.
Learning-based hash algorithms use training samples to learn a mapping function that maps high-dimensional real-valued vectors into compact, similarity-preserving hash codes, improving search accuracy. However, such methods cannot be directly applied to a recommendation model without a significant drop in recognition rate. Moreover, the hash codes must be stored in a storage array formed by nonvolatile devices and then searched in parallel within that array: the longer the hash code, the higher the parallelism required for each search, while many emerging nonvolatile devices exhibit noise (non-ideal factors) and cannot provide the matching high parallelism, making direct deployment on an in-memory computing chip difficult. The embodiments of the present application therefore address these problems by providing an in-memory search implementation method based on a deep hash algorithm, with the following specific implementation process:
fig. 1 shows a flow chart of an in-memory search implementation method based on a deep hash algorithm, where the deep hash algorithm includes a feature extraction layer and a hash layer, and as shown in fig. 1, the in-memory search implementation method based on the deep hash algorithm includes:
step 101: and extracting user node characteristics corresponding to the user nodes and to-be-recommended article node characteristics corresponding to the to-be-recommended article nodes by utilizing the characteristic extraction layer.
In the present application, the implementation procedure of the above step 101 may include the following substeps:
substep A1: and collecting node characteristics of all the articles to be recommended of the previous layer to the corresponding user node characteristics of the previous layer, and iterating to obtain the user node characteristics of the current layer.
Substep A2: and collecting all the user node characteristics of the previous layer to the corresponding item node characteristics to be recommended, and iterating to obtain the item node characteristics to be recommended of the current layer.
Step 102: and adopting the Ha Xiceng to respectively convert all the user node characteristics and the node characteristics of the to-be-recommended article output by the characteristic extraction layer into corresponding binary hash codes.
In the application, the user node characteristic and the article node characteristic to be recommended can be respectively subjected to node characteristic binarization through a symbol function, and the user node characteristic and the article node characteristic to be recommended after the node characteristic binarization are respectively converted into compact binary hash codes through a binary full-connection network.
Step 103: and when searching, respectively determining the similarity between the binary hash codes corresponding to the user nodes and the binary hash codes of all the to-be-recommended object nodes through the Hamming distance.
In the application, after the hash codes of all the users and item nodes are obtained, the Hamming distance can be used for calculating and inquiring the similarity between the hash codes of the user nodes and the hash codes of all the item nodes when searching.
Step 104: and determining the target to-be-recommended object based on the multiple similarities.
In the application, the item recommended to the user is judged according to the similarity, namely the target object to be recommended.
In the in-memory search implementation method based on the deep hash algorithm provided by the embodiments of the application, the deep hash algorithm comprises a feature extraction layer and a hash layer, and the method includes: extracting, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended; converting, by the hash layer, all the user node features and item node features output by the feature extraction layer into corresponding binary hash codes; when searching, determining, via the Hamming distance, the similarity between the binary hash code corresponding to the user node and the binary hash codes of all the item nodes to be recommended; and determining the target item to be recommended based on the similarities. Good search accuracy can thereby be achieved, and the method can be directly applied to a recommendation model and directly deployed on an in-memory computing chip with high accuracy.
Optionally, fig. 2 shows a flowchart of another implementation method of in-memory searching based on a deep hash algorithm according to an embodiment of the present application, and referring to fig. 2, the implementation method of in-memory searching based on a deep hash algorithm includes:
step 201: training and verifying the deep hash algorithm through the recommended data set.
In this application, the implementation procedure of the step 201 may include the following substeps:
substep B1: the recommended data set is converted into a user-work data set with connection relation between nodes.
Alternatively, the recommendation dataset may be a MovieLens dataset. MovieLens comprises multiple versions; the MovieLens 1M dataset may be used, comprising 1,000,000 ratings from 6,000 users for 4,000 movies. The embodiment of the application does not specifically limit the selected recommendation dataset, which can be chosen according to the actual application scenario.
In the application, the dataset can first be converted into an implicit representation: if a user u has rated a work i, it is recorded that a connection relation user-work (u-i pair) exists between the two nodes, yielding the user-work dataset.
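As a hedged illustration of this conversion, a minimal Python sketch follows; the file name, separator, and column names are assumptions based on the public MovieLens 1M release, not details specified by this application:

```python
import pandas as pd

# Hypothetical sketch: turn explicit MovieLens-style ratings into implicit
# user-work connection pairs; any rating at all counts as a connection (u-i pair).
ratings = pd.read_csv("ml-1m/ratings.dat", sep="::", engine="python",
                      names=["user", "work", "rating", "timestamp"])
u_i_pairs = ratings[["user", "work"]].drop_duplicates()
print(f"{len(u_i_pairs)} user-work connection pairs")
```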
Substep B2: the user-work data set is divided into a training set and a testing set.
Substep B3: the training set is divided into positive and negative pairs of samples.
Specifically, the user-work pairs in the training set that have a connection (edge) relationship may be divided into positive sample pairs, and the user-work pairs without a connection relationship into negative sample pairs.
Substep B4: and randomly selecting part of training data from the positive sample pair and the negative sample pair, and determining the approximate hash codes of all nodes in the training data through a deep hash algorithm.
Optionally, the amount of randomly selected training data is not specifically limited; training may be divided into multiple batches, each batch taking, for example, approximately 200 positive sample pairs and 1,000 negative sample pairs, set according to the specific application scenario.
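A minimal sketch of such batch sampling is shown below (all names are hypothetical; drawing negatives at random from unobserved user-work pairs is one common convention, not mandated by the application):

```python
import numpy as np

def sample_batch(pos_pairs, num_users, num_works, n_pos=200, n_neg=1000, rng=None):
    """Draw one training batch: n_pos observed (positive) pairs and n_neg
    random unobserved (negative) pairs. pos_pairs is an array of (u, i) rows."""
    rng = rng or np.random.default_rng()
    pos = pos_pairs[rng.choice(len(pos_pairs), size=n_pos, replace=False)]
    observed = set(map(tuple, pos_pairs.tolist()))
    neg = []
    while len(neg) < n_neg:
        pair = (int(rng.integers(num_users)), int(rng.integers(num_works)))
        if pair not in observed:  # keep only pairs assumed unconnected
            neg.append(pair)
    return pos, np.array(neg)
```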
Substep B5: a positive sample pair prediction result is determined based on the positive sample pair and the approximate hash code.
In the application, the distance between the user node hash codes and all the work node hash codes in the approximate hash codes corresponding to the positive sample pair can be determined; determining a positive sample pair prediction result P based on distances between user node hash codes and all work node hash codes in the approximate hash codes θ I.e. the probability that there is a connection between two nodes in the sample pair.
Substep B6: and determining the distance between the user node hash codes corresponding to the negative sample pairs and all the work node hash codes.
Substep B7: and determining a prediction error value through a cross entropy method according to the real connection relation determined by the training set.
In the present application, the real connection relation is denoted as $y_{ui}$, and the prediction error value is calculated by the cross entropy method shown in formula (1):

$\ell_{\text{cross-entropy}} = \sum -y_{ui}\,\log p_\theta$    (1)
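For illustration, formula (1) translates directly into code; the sketch below assumes `p_theta` holds the predicted connection probabilities and `y` the 0/1 labels of the sampled pairs (names hypothetical):

```python
import torch

def cross_entropy_loss(p_theta, y, eps=1e-8):
    """Formula (1): sum of -y * log(p_theta) over the sampled pairs.
    Terms with y = 0 (negative pairs) vanish in this form; a full binary
    cross entropy would also add -(1 - y) * log(1 - p_theta)."""
    return torch.sum(-y * torch.log(p_theta + eps))
```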
substep B8: parameters in the feature extraction layer and the Ha Xiceng are updated based on the prediction error values.
Substep B9: and obtaining the corresponding binary hash codes from a plurality of user nodes randomly selected from the test set through feature extraction and a hash layer.
Substep B10: approximating the activation function in the hash layer with a sign function, and determining the hamming distance from the user node to each work for the selected user node.
In the application, the activation function tanh of the hash layer is approximated by a sign function, and the hamming distance of each work i is calculated for the selected user node u.
Substep B11: and determining the work corresponding to the minimum Hamming distance in the Hamming distances as the target object to be recommended.
Step 202: and extracting user node characteristics corresponding to the user nodes and to-be-recommended article node characteristics corresponding to the to-be-recommended article nodes by utilizing the characteristic extraction layer.
In this application, the implementation procedure of the step 202 may include the following substeps:
substep A1: and collecting node characteristics of all the articles to be recommended of the previous layer to the corresponding user node characteristics of the previous layer, and iterating to obtain the user node characteristics of the current layer.
In this application, the processing procedure is the same for user node features and item node features (the item nodes being the items to be recommended); taking the user node as an example, the feature extraction iterative process is shown in formula (2), where the feature $u_i^{(k+1)}$ of user node $i$ at layer $k+1$ is expressed as:

$u_i^{(k+1)} = \mathrm{AGG}\big(\{\, v_j^{(k)} : j \in N_i \,\}\big)$    (2)

where $N_i$ represents the set of all item nodes $j$ that have interacted with user node $i$, $v_j^{(k)}$ represents the features of the item nodes at layer $k$, and AGG represents the aggregation operation, i.e., the features of all item nodes of the previous layer are aggregated onto the user node features of the previous layer to obtain the user node features of the current layer.
Substep A2: and collecting all the user node characteristics of the previous layer to the corresponding item node characteristics to be recommended, and iterating to obtain the item node characteristics to be recommended of the current layer.
Step 203: and respectively binarizing the node characteristics of the user node characteristics and the node characteristics of the articles to be recommended through a symbol function.
In the application, the hash layer can be used to convert the node features output by the feature extraction layer into binary hashes, and the node features can be binarized first, and a symbol function shown in the formula (3) is adopted:
step 204: and respectively converting the user node characteristics and the node characteristics of the articles to be recommended after the node characteristics are binarized into compact binary hash codes through a binary full-connection network.
In this application, a binary fully-connected network may be used to convert the node features into compact hash codes, as shown in formula (4):

$h_i = \tanh(w^{T} u_i + b)$    (4)

where $w$ represents the weight matrix to be learned: it is multiplied with the node feature $u_i$, the bias $b$ is added, and the approximate binary hash code of the user node is obtained through the tanh activation function.
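Formulas (3) and (4) together amount to the following hash-layer sketch (feature dimension and code length are placeholder assumptions; an explicit comparison is used because torch.sign maps 0 to 0, whereas formula (3) maps it to +1):

```python
import torch
import torch.nn as nn

class HashLayer(nn.Module):
    """Sketch: binarize node features (formula (3)), then map them through a
    fully-connected layer with a tanh activation (formula (4)) to obtain
    approximate binary hash codes."""
    def __init__(self, feat_dim=64, code_len=32):
        super().__init__()
        self.fc = nn.Linear(feat_dim, code_len)  # weight matrix w and bias b

    def forward(self, u):
        # formula (3): sign(u), with sign(0) = +1
        u_bin = torch.where(u >= 0, torch.ones_like(u), -torch.ones_like(u))
        return torch.tanh(self.fc(u_bin))        # formula (4): tanh(w^T u + b)
```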
Step 205: and when searching, respectively determining the similarity between the binary hash codes corresponding to the user nodes and the binary hash codes of all the to-be-recommended object nodes through the Hamming distance.
In the application, after the hash codes of all the users and item nodes are obtained, the Hamming distance can be used for calculating and inquiring the similarity between the hash codes of the user nodes and the hash codes of all the item nodes when searching.
Step 206: and determining the target to-be-recommended object based on the multiple similarities.
In the application, according to the similarity, judging the item recommended to the user, namely the target object to be recommended, specifically as shown in a formula (5):
and determining the item node with the maximum similarity as the target item to be recommended.
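Because the codes lie in $\{-1,+1\}^L$, the Hamming distance follows directly from the inner product, which gives a compact search sketch (names hypothetical):

```python
import torch

def recommend(user_code, item_codes):
    """Formula (5) sketch: for codes in {-1, +1}^L, d_H = (L - <h_u, h_i>) / 2,
    so the most similar item has the smallest Hamming distance.
    user_code: (L,); item_codes: (num_items, L)."""
    L = item_codes.shape[1]
    d_hamming = (L - item_codes @ user_code) / 2
    return int(torch.argmin(d_hamming))  # index of the target item
```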
Fig. 3 shows a hardware schematic diagram of an in-memory search implementation based on a deep hash algorithm according to an embodiment of the present application. As shown in fig. 3, the RRAM-based hardware array is divided into two parts: a MAC (multiply-accumulate) array 01 used for hash code generation and a CAM (content addressable memory) array 02 used for hash code search. First, SET or RESET operations are performed on the resistive random access memory devices to physically map the binary weights of the hash layer onto the RRAM array. Then, the initial node features obtained by the feature extractor are binarized and mapped to voltage inputs, and the hash code generation of the hash layer is realized through analog multiply-accumulate operations on the RRAM array. Similarly, the binary states of the resistive memory devices are used to store the hash codes of all work nodes on the RRAM-based CAM array; the user node hash code is mapped to voltage inputs, and the Hamming distances between the queried user hash code and all stored work hash codes are computed on the RRAM array through appropriate input voltages and CAM cell designs. This searches all stored work hash codes in parallel on the array, so the search result is obtained in one step.
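As a software analogue only (the chip performs this in analog hardware, as described above), the one-step parallel CAM search corresponds to an XOR-plus-popcount over all stored codes; the bit-packed representation below is a hypothetical stand-in for the array's stored binary states:

```python
import numpy as np

def cam_search(query_bits: int, stored_bits: list) -> int:
    """Software analogue of the CAM array in fig. 3: compare the query hash
    code against every stored work hash code 'in parallel'; XOR + popcount
    plays the role of the match-line Hamming-distance computation."""
    distances = [bin(query_bits ^ code).count("1") for code in stored_bits]
    return int(np.argmin(distances))
```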
HR@50 and NDCG@50 are two common metrics used to evaluate the performance of a recommendation system. HR@50 (Hit Rate at 50) measures how many recommended items among the first 50 recommendations in the recommendation list are actually of interest to the user. NDCG@50 (Normalized Discounted Cumulative Gain at 50) measures the ranking quality of the recommendation list by normalizing the accumulated, discounted relevance of items in the list. HR intuitively measures whether an item appears in the top-K list, while NDCG reflects ranking quality by assigning higher scores to higher-ranked hits. In NDCG, DCG@K (Discounted Cumulative Gain at K) represents the accumulated discounted relevance of the recommended items among the first K recommendations, and IDCG@K (Ideal Discounted Cumulative Gain at K) represents that accumulated value under ideal conditions (the predicted ordering agrees with the actual ordering among the first K items).
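One common formulation of these two metrics for a single user is sketched below (exact definitions vary slightly across the literature, and the application does not give explicit formulas):

```python
import numpy as np

def hr_ndcg_at_k(ranked_works, relevant, k=50):
    """ranked_works: work ids ordered by predicted similarity; relevant: set
    of works the user actually interacted with. Returns (HR@K, NDCG@K)."""
    hits = [1.0 if w in relevant else 0.0 for w in ranked_works[:k]]
    hr = float(any(hits))                                  # any hit in top K?
    dcg = sum(h / np.log2(r + 2) for r, h in enumerate(hits))
    idcg = sum(1.0 / np.log2(r + 2) for r in range(min(len(relevant), k)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return hr, ndcg
```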
Fig. 4 shows a schematic diagram of a test structure for testing the deep hash algorithm of the recommendation system on a dataset. As shown in fig. 4, for the two common metrics HR@50 and NDCG@50 used to evaluate recommendation system performance, the vertical axis represents Accuracy, comparing the accuracy of the deep hash algorithm on a graphics processor (GPU) with that of the binary deep hash algorithm provided in the embodiments of the application on resistive random access memory (RRAM). The deep hash algorithm uses a neural network to learn data- and task-specific hash functions, and the compact binary hash codes it generates achieve good search accuracy on both metrics. The method is directly applied to the recommendation model and directly deployed on the in-memory computing chip, with high accuracy.
In the in-memory search implementation method based on the deep hash algorithm provided by the embodiments of the application, the deep hash algorithm comprises a feature extraction layer and a hash layer, and the method includes: extracting, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended; converting, by the hash layer, all the user node features and item node features output by the feature extraction layer into corresponding binary hash codes; when searching, determining, via the Hamming distance, the similarity between the binary hash code corresponding to the user node and the binary hash codes of all the item nodes to be recommended; and determining the target item to be recommended based on the similarities. Good search accuracy can thereby be achieved, and the method can be directly applied to a recommendation model and directly deployed on an in-memory computing chip with high accuracy.
Fig. 5 shows a schematic structural diagram of an in-memory search implementation device based on a deep hash algorithm according to an embodiment of the present application, and as shown in fig. 5, the in-memory search implementation device 300 based on a deep hash algorithm includes:
the feature extraction module 301 is configured to extract, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended;
the binary hash code conversion module 302 is configured to convert, by the hash layer, all the user node features and item node features output by the feature extraction layer into corresponding binary hash codes;
the similarity determining module 303 is configured to determine, via the Hamming distance when searching, the similarity between the binary hash code corresponding to the user node and the binary hash codes of all the item nodes to be recommended;
and the target item determining module 304 is configured to determine the target item to be recommended based on the plurality of similarities.
In one possible implementation, the apparatus further includes:
and the training verification module is used for training and verifying the deep hash algorithm through the recommended data set.
In one possible implementation manner, the feature extraction module includes:
the first feature extraction submodule is used for aggregating the previous-layer features of all the item nodes to be recommended onto the corresponding previous-layer user node features, and iterating to obtain the user node features of the current layer;
and the second feature extraction submodule is used for aggregating all the previous-layer user node features onto the corresponding item node features to be recommended, and iterating to obtain the item node features to be recommended of the current layer.
In one possible implementation manner, the binary hash code conversion module includes:
the binarization submodule is used for binarizing the user node features and the item node features to be recommended through a sign function, respectively;
and the first conversion sub-module is used for converting the binarized user node features and item node features to be recommended into compact binary hash codes through a binary fully-connected network, respectively.
In one possible implementation, the training verification module includes:
the second conversion sub-module is used for converting the recommended data set into a user-work data set with a connection relation between nodes;
the first dividing sub-module is used for dividing the user-work data set into a training set and a testing set;
a second dividing sub-module for dividing the training set into positive and negative sample pairs;
a first determining submodule, configured to randomly select a part of training data from the positive sample pair and the negative sample pair, and determine approximate hash codes of all nodes in the training data through a deep hash algorithm;
a second determination sub-module for determining a positive sample pair prediction result based on the positive sample pair and the approximate hash code;
a third determining submodule, configured to determine distances between the user node hash codes corresponding to the negative sample pairs and all the work node hash codes;
a fourth determining sub-module, configured to determine a prediction error value according to the real connection relationship determined by the training set by using a cross entropy method;
an updating sub-module for updating parameters in the feature extraction layer and the hash layer according to the prediction error value.
In one possible implementation, the training verification module further includes:
a third feature extraction sub-module, configured to pass a plurality of user nodes randomly selected from the test set through the feature extraction layer and the hash layer to obtain the corresponding binary hash codes;
a fifth determining submodule, configured to approximate the activation function in the hash layer with a sign function, and determine, for each selected user node, the Hamming distance from the user node to each work;
and a sixth determining submodule, configured to determine the work corresponding to the minimum Hamming distance among the Hamming distances as the target item to be recommended.
In one possible implementation, the second dividing submodule includes:
a first dividing unit for dividing the user-work pairs in the training set that have a connection (edge) relationship into positive sample pairs;
and a second dividing unit for dividing the user-work pairs in the training set that have no connection relationship into negative sample pairs.
In one possible implementation, the second determining submodule includes:
a first determining unit, configured to determine distances between user node hash codes and all work node hash codes in the approximate hash codes corresponding to the positive sample pair;
and the second determining unit is used for determining a positive sample pair prediction result based on the distances between the user node hash codes and all the work node hash codes in the approximate hash codes.
The in-memory search implementation device based on the deep hash algorithm provided by the embodiments of the application extracts, by the feature extraction layer, user node features corresponding to user nodes and item node features corresponding to item nodes to be recommended; converts, by the hash layer, all the user node features and item node features output by the feature extraction layer into corresponding binary hash codes; when searching, determines, via the Hamming distance, the similarity between the binary hash code corresponding to the user node and the binary hash codes of all the item nodes to be recommended; and determines the target item to be recommended based on the similarities. Good search accuracy can thereby be achieved, and the device can be directly applied to a recommendation model and directly deployed on an in-memory computing chip with high accuracy.
The in-memory search implementation device based on the deep hash algorithm is applied to the in-memory search implementation method based on the deep hash algorithm shown in any one of fig. 1 to 4, and details are not repeated here to avoid repetition.
The electronic device in the embodiment of the application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The electronic device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
Fig. 6 shows a schematic hardware structure of an electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic device 400 includes a processor 410.
As shown in FIG. 6, the processor 410 may be a general purpose central processing unit (central processing unit, CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
As shown in fig. 6, the electronic device 400 may further include a communication line 440. Communication line 440 may include a path to communicate information between the above-described components.
Optionally, as shown in fig. 6, the electronic device may further include a communication interface 420. The communication interface 420 may be one or more. Communication interface 420 may use any transceiver-like device for communicating with other devices or communication networks.
Optionally, as shown in fig. 6, the electronic device may also include a memory 430. Memory 430 is used to store computer-executable instructions for performing aspects of the present application and is controlled by the processor for execution. The processor is configured to execute computer-executable instructions stored in the memory, thereby implementing the method provided in the embodiments of the present application.
As shown in fig. 6, the memory 430 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 430 may be stand-alone and coupled to the processor 410 via the communication line 440. The memory 430 may also be integrated with the processor 410.
Alternatively, the computer-executable instructions in the embodiments of the present application may be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
In a particular implementation, as one embodiment, as shown in FIG. 6, processor 410 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 6.
In a specific implementation, as an embodiment, as shown in fig. 6, the terminal device may include a plurality of processors, such as a first processor 4101 and a second processor 4102 in fig. 6. Each of these processors may be a single-core processor or a multi-core processor.
Fig. 7 is a schematic structural diagram of a chip according to an embodiment of the present application. As shown in fig. 7, the chip 500 includes one or more (including two) processors 410.
Optionally, as shown in fig. 7, the chip further includes a communication interface 420 and a memory 430, and the memory 430 may include a read-only memory and a random access memory, and provides operation instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
In some implementations, as shown in FIG. 7, the memory 430 stores elements, execution modules or data structures, or a subset thereof, or an extended set thereof.
In the embodiment of the present application, as shown in fig. 7, by calling the operation instruction stored in the memory (the operation instruction may be stored in the operating system), the corresponding operation is performed.
As shown in fig. 7, the processor 410 controls processing operations of any one of the terminal devices, and the processor 410 may also be referred to as a central processing unit (central processing unit, CPU).
As shown in fig. 7, the memory 430 may include read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory 430 may also include NVRAM. The memory, the communication interface, and the processor are coupled together by a bus system, which may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 7.
As shown in fig. 7, the method disclosed in the embodiments of the present application may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
In one aspect, a computer-readable storage medium is provided, in which instructions are stored; when the instructions are executed, the functions performed by the terminal device in the above embodiments are implemented.
In one aspect, a chip is provided. The chip is applied to a terminal device and includes at least one processor and a communication interface, the communication interface is coupled to the at least one processor, and the processor is configured to execute instructions to implement the in-memory search implementation method based on the deep hash algorithm in the foregoing embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are performed in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a terminal, a user equipment, or another programmable apparatus. The computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium, for example, a floppy disk, a hard disk, or a magnetic tape; an optical medium, for example, a digital video disc (DVD); or a semiconductor medium, for example, a solid-state drive (SSD).
Although the present application has been described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the figures, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the present application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined by the appended claims, and the present application is intended to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the claims and their equivalents.

Claims (10)

1. An in-memory search implementation method based on a deep hash algorithm, wherein the deep hash algorithm comprises a feature extraction layer and a hash layer, and the method comprises the following steps:
extracting, by the feature extraction layer, user node features corresponding to user nodes and to-be-recommended item node features corresponding to to-be-recommended item nodes;
converting, by the hash layer, all the user node features and the to-be-recommended item node features output by the feature extraction layer into corresponding binary hash codes respectively;
when searching, determining, through the Hamming distance, the similarity between the binary hash code corresponding to a user node and the binary hash codes of all the to-be-recommended item nodes respectively;
and determining a target item to be recommended based on the plurality of similarities.
2. The method according to claim 1, wherein before the extracting, by the feature extraction layer, user node features corresponding to user nodes and to-be-recommended item node features corresponding to to-be-recommended item nodes, the method further comprises:
training and verifying the deep hash algorithm through a recommendation data set.
3. The method according to claim 1, wherein the extracting, by the feature extraction layer, user node features corresponding to user nodes and to-be-recommended item node features corresponding to to-be-recommended item nodes comprises:
aggregating all the to-be-recommended item node features of the previous layer onto the corresponding user node features of the previous layer, and iterating to obtain the user node features of the current layer;
and aggregating all the user node features of the previous layer onto the corresponding to-be-recommended item node features, and iterating to obtain the to-be-recommended item node features of the current layer.
4. The method according to claim 1, wherein the converting, by the hash layer, all the user node features and the to-be-recommended item node features output by the feature extraction layer into corresponding binary hash codes respectively comprises:
binarizing the user node features and the to-be-recommended item node features respectively through a sign function;
and converting the binarized user node features and to-be-recommended item node features into compact binary hash codes respectively through a binary fully connected network.
5. The method according to claim 2, wherein the training and verifying the deep hash algorithm through the recommendation data set comprises:
converting the recommendation data set into a user-work data set with connection relationships between nodes;
dividing the user-work data set into a training set and a test set;
dividing the training set into positive sample pairs and negative sample pairs;
randomly selecting part of the training data from the positive sample pairs and the negative sample pairs, and determining approximate hash codes of all nodes in the training data through the deep hash algorithm;
determining a positive sample pair prediction result based on the positive sample pairs and the approximate hash codes;
determining the distances between the user node hash codes corresponding to the negative sample pairs and all the work node hash codes;
determining a prediction error value through a cross-entropy method according to the real connection relationships determined by the training set;
and updating parameters in the feature extraction layer and the hash layer based on the prediction error value.
6. The method according to claim 5, wherein after the updating of the parameters in the feature extraction layer and the hash layer based on the prediction error value, the method further comprises:
passing a plurality of user nodes randomly selected from the test set through the feature extraction layer and the hash layer to obtain the corresponding binary hash codes;
approximating an activation function in the hash layer with a sign function, and determining, for each selected user node, the Hamming distance from the user node to each work;
and determining the work corresponding to the minimum Hamming distance among the Hamming distances as the target item to be recommended.
7. The method according to claim 5, wherein the dividing the training set into positive sample pairs and negative sample pairs comprises:
dividing a plurality of pieces of user-work representation data having a connection relationship in the training set into positive sample pairs;
and dividing the plurality of pieces of user-work representation data in the training set for which no connection relationship exists into negative sample pairs.
8. The method according to claim 5, wherein the determining a positive sample pair prediction result based on the positive sample pairs and the approximate hash codes comprises:
determining the distances between the user node hash codes and all the work node hash codes in the approximate hash codes corresponding to the positive sample pairs;
and determining the positive sample pair prediction result based on the distances between the user node hash codes and all the work node hash codes in the approximate hash codes.
9. An in-memory search implementation device based on a deep hash algorithm, wherein the device comprises:
a feature extraction module, configured to extract, by a feature extraction layer, user node features corresponding to user nodes and to-be-recommended item node features corresponding to to-be-recommended item nodes;
a binary hash code conversion module, configured to convert, by a hash layer, all the user node features and the to-be-recommended item node features output by the feature extraction layer into corresponding binary hash codes respectively;
a similarity determining module, configured to determine, through the Hamming distance when searching, the similarity between the binary hash code corresponding to a user node and the binary hash codes of all the to-be-recommended item nodes respectively;
and a target item determining module, configured to determine a target item to be recommended based on the plurality of similarities.
10. An electronic device, comprising: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause performance of the in-memory search implementation method based on the deep hash algorithm according to any one of claims 1 to 8.
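The three sketches below are editorial illustrations of the claimed method in Python; every name, shape, weight, and numeric value in them is a hypothetical stand-in chosen for demonstration, not a detail taken from the patent.

First, the layer-wise feature extraction of claim 3. One plausible reading is bipartite message passing in the style of graph-based collaborative filtering: in each layer, the item node features of the previous layer are gathered onto the connected user nodes, and the user node features of the previous layer are gathered onto the connected item nodes. The random adjacency matrix and the mean-style aggregation are assumptions.

import numpy as np

rng = np.random.default_rng(1)

n_users, n_items, dim, n_layers = 4, 6, 16, 2
# Hypothetical user-item connection matrix (1 = observed interaction).
adj = (rng.random((n_users, n_items)) < 0.4).astype(float)

user_x = rng.standard_normal((n_users, dim))   # layer-0 user node features
item_x = rng.standard_normal((n_items, dim))   # layer-0 item node features

for _ in range(n_layers):
    # Degrees for mean aggregation, clamped to avoid division by zero.
    u_deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    i_deg = np.maximum(adj.sum(axis=0, keepdims=True).T, 1.0)
    # Gather item features of the previous layer onto connected user nodes,
    # and user features of the previous layer onto connected item nodes.
    user_x, item_x = (adj @ item_x) / u_deg, (adj.T @ user_x) / i_deg

print(user_x.shape, item_x.shape)   # (4, 16) (6, 16)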
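Second, the encode-then-search path of claims 1, 4, and 6: sign-function binarization, a binary fully connected network that yields compact codes, and Hamming-distance ranking at search time. Random features stand in for the output of a trained feature extraction layer, and a fixed +/-1 projection stands in for the trained binary network; the XOR-and-popcount distance is the bitwise comparison that an in-memory search array can evaluate across all stored codes in parallel.

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for features from a trained feature extraction layer:
# 4 user nodes and 6 to-be-recommended item nodes, 16-dimensional.
user_feats = rng.standard_normal((4, 16))
item_feats = rng.standard_normal((6, 16))

# Fixed +/-1 weights standing in for a trained binary fully connected network.
proj = np.sign(rng.standard_normal((16, 8)))

def hash_layer(features, weights):
    # Binarize with the sign function, then map through the binary
    # fully connected network to an 8-bit binary hash code per node.
    binarized = np.sign(features)
    return (binarized @ weights > 0).astype(np.uint8)

user_codes = hash_layer(user_feats, proj)   # shape (4, 8), values in {0, 1}
item_codes = hash_layer(item_feats, proj)   # shape (6, 8)

def hamming(user_code, codes):
    # Hamming distance = XOR followed by popcount.
    return np.count_nonzero(user_code ^ codes, axis=1)

for u, code in enumerate(user_codes):
    dists = hamming(code, item_codes)
    print(f"user {u}: distances {dists}, target item {int(np.argmin(dists))}")

The item with the minimum Hamming distance is returned as the target item to be recommended, matching the final steps of claims 1 and 6.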
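Third, the training loop of claims 5, 7, and 8. Connected user-work pairs become positive sample pairs and unconnected pairs become negative sample pairs; a differentiable surrogate replaces the sign function so that approximate hash codes can carry gradients; and a cross-entropy error on the predicted connections updates the hash layer parameters. The PyTorch sketch below is one assumed realization: tanh as the smooth surrogate for sign and an inner-product logit are common deep-hashing choices rather than details fixed by the claims, and the pair list is fabricated for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)
n_users, n_works, dim, bits = 8, 12, 16, 8

user_feats = torch.randn(n_users, dim)   # stand-ins for extracted features
work_feats = torch.randn(n_works, dim)

hash_layer = nn.Linear(dim, bits)        # trainable fully connected hash layer
opt = torch.optim.Adam(hash_layer.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

# Hypothetical (user, work, label) pairs: label 1.0 marks an observed
# connection (positive pair), 0.0 marks no connection (negative pair).
pairs = [(0, 1, 1.0), (0, 5, 0.0), (3, 2, 1.0), (3, 9, 0.0), (7, 11, 1.0)]
u_idx = torch.tensor([p[0] for p in pairs])
w_idx = torch.tensor([p[1] for p in pairs])
labels = torch.tensor([p[2] for p in pairs])

for epoch in range(100):
    # tanh is a differentiable approximation of the sign function; it
    # produces the "approximate hash codes" used during training.
    u_code = torch.tanh(hash_layer(user_feats[u_idx]))
    w_code = torch.tanh(hash_layer(work_feats[w_idx]))

    # The inner product of near-binary codes rises as the Hamming distance
    # falls, so it serves as the connection logit for the cross-entropy loss.
    logits = (u_code * w_code).sum(dim=1) / bits
    loss = bce(logits, labels)

    opt.zero_grad()
    loss.backward()
    opt.step()   # updates the parameters of the hash layer

print(f"final training loss: {loss.item():.4f}")

At verification time, as in claim 6, the tanh surrogate would be replaced by the hard sign function and candidates ranked by Hamming distance, as in the previous sketch.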
CN202311617396.XA 2023-11-29 2023-11-29 In-memory search implementation method and device based on deep hash algorithm and electronic equipment Pending CN117520657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311617396.XA CN117520657A (en) 2023-11-29 2023-11-29 In-memory search implementation method and device based on deep hash algorithm and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311617396.XA CN117520657A (en) 2023-11-29 2023-11-29 In-memory search implementation method and device based on deep hash algorithm and electronic equipment

Publications (1)

Publication Number Publication Date
CN117520657A true CN117520657A (en) 2024-02-06

Family

ID=89749230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311617396.XA Pending CN117520657A (en) 2023-11-29 2023-11-29 In-memory search implementation method and device based on deep hash algorithm and electronic equipment

Country Status (1)

Country Link
CN (1) CN117520657A (en)

Similar Documents

Publication Publication Date Title
CN110278175B (en) Graph structure model training and garbage account identification method, device and equipment
US20200110842A1 (en) Techniques to process search queries and perform contextual searches
US20090307176A1 (en) Clustering-based interest computation
CN111612039A (en) Abnormal user identification method and device, storage medium and electronic equipment
CN110688974A (en) Identity recognition method and device
US20160371538A1 (en) Accelerating Object Detection
CN111125658B (en) Method, apparatus, server and storage medium for identifying fraudulent user
CN113360803B (en) Data caching method, device, equipment and storage medium based on user behaviors
CN111008620A (en) Target user identification method and device, storage medium and electronic equipment
CN111966886A (en) Object recommendation method, object recommendation device, electronic equipment and storage medium
CN111898380A (en) Text matching method and device, electronic equipment and storage medium
CN113987119A (en) Data retrieval method, cross-modal data matching model processing method and device
CN111310743B (en) Face recognition method and device, electronic equipment and readable storage medium
CN116822651A (en) Large model parameter fine adjustment method, device, equipment and medium based on incremental learning
CN113837635A (en) Risk detection processing method, device and equipment
CN115712866A (en) Data processing method, device and equipment
US9424484B2 (en) Feature interpolation
US20230281696A1 (en) Method and apparatus for detecting false transaction order
CN116542673B (en) Fraud identification method and system applied to machine learning
CN112364198A (en) Cross-modal Hash retrieval method, terminal device and storage medium
KR20180028610A (en) Machine learning method using relevance vector machine, computer program implementing the same and information processing device configured to perform the same
CN117520657A (en) In-memory search implementation method and device based on deep hash algorithm and electronic equipment
CN113780318B (en) Method, device, server and medium for generating prompt information
CN104965853A (en) Method and system for recommending aggregation application, method and device for aggregating various recommendation resources
CN112232417A (en) Classification method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination