CN105893477A - Distance preserving Hash method based on double-circuit neural network - Google Patents

Distance preserving Hash method based on double-circuit neural network

Info

Publication number
CN105893477A
Authority
CN
China
Prior art keywords
object function
neural network
hash method
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610186444.8A
Other languages
Chinese (zh)
Inventor
周文罡
王敏
李厚强
田奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201610186444.8A priority Critical patent/CN105893477A/en
Publication of CN105893477A publication Critical patent/CN105893477A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22 Indexing; Data structures therefor; Storage structures
    • G06F 16/2228 Indexing structures
    • G06F 16/2255 Hash tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a distance-preserving hashing method based on a two-path neural network. The method comprises the following steps: using an unsupervised hashing method to generate a binary code for each training data point, and forming data pairs from pairs of training points together with their corresponding binary codes; feeding the data pairs into the two-path neural network, whose training objective is to preserve a linear transformation between the distances of each data pair in the two spaces while remaining faithful to the original unsupervised hashing method; alternately updating the neural network parameters and the linear distance transformation parameters until convergence or a set number of iterations is reached, after which either path of the two-path network constitutes the newly learned hash function. The disclosed method can significantly improve the retrieval performance of unsupervised hashing methods.

Description

A distance-preserving hashing method based on a two-path neural network
Technical field
The present invention relates to the field of multimedia technology, and in particular to a distance-preserving hashing method based on a two-path neural network.
Background art
Approximate nearest neighbor (ANN) search is a fundamental problem in computer vision and multimedia applications. Given a query sample, ANN search finds the query's neighbors in a large data set with high probability, with sub-linear or even constant time complexity. There are two main families of ANN techniques: tree-based methods and hashing methods. Tree-based methods suffer from the curse of dimensionality in high-dimensional spaces, whereas hashing methods perform well even there and have therefore become increasingly popular.
Depending on whether training data are used, existing hashing methods fall into two classes: data-dependent and data-independent methods. Data-independent methods use no training data; they typically generate binary codes by applying random mappings to the data, and it has been proven theoretically that with a data-independent hashing method the local neighborhood structure of the original space is preserved in Hamming space. The drawback of data-independent methods is that, in large-scale applications, the code length must be very long to obtain acceptable performance, and longer codes lower the recall. To raise recall, such methods usually resort to multiple hash tables, which in turn causes new problems of high memory consumption and high computational complexity.
Data-dependent hashing methods have therefore been studied; they use a training data set to learn more compact binary codes. By mapping high-dimensional data to compact binary codes, approximate nearest neighbor search can be completed in sub-linear time complexity. Depending on whether the training data carry label or class information, data-dependent methods can be further divided into three classes: supervised, semi-supervised, and unsupervised methods.
Supervised hashing methods train on data with label information, while semi-supervised methods train with only part of the data labeled. Both classes usually formalize the learning of the hash function as a classification or optimization problem, with pairwise or triplet information commonly serving as the objective that guides learning. Although in most of the literature supervised and semi-supervised hashing methods achieve higher performance than unsupervised ones, in many practical applications class or label information is simply unavailable. As a result, unsupervised hashing methods are still widely studied and used.
Unsupervised hashing methods use data without any label information. They typically exploit the distribution of the data or intrinsic attributes of good binary codes (such as balance and independence) to preserve the neighborhood structure of the data and to minimize quantization error. These constraints are essentially constraints on single points, and do not directly reflect the distance-preserving goal of hashing. In view of this, a method of general applicability is needed to improve the performance of unsupervised hashing methods.
Summary of the invention
The object of the present invention is to provide a distance-preserving hashing method based on a two-path neural network that can significantly improve the retrieval performance of unsupervised hashing methods.
The object of the invention is achieved through the following technical solutions:
A distance-preserving hashing method based on a two-path neural network, comprising:
using an unsupervised hashing method to generate a binary code for each training data point, and forming data pairs from pairs of training points together with their corresponding binary codes;
feeding the data pairs into the two-path neural network, whose training objective is to preserve the linear distance transformation relation of the data pairs across the two spaces while keeping the fidelity of the original unsupervised hashing method;
alternately updating the neural network parameters and the linear distance transformation parameters until convergence or until a set number of iterations is reached, after which either path of the two-path network constitutes the newly learned hash function.
Further, the objective function that keeps the linear distance transformation relation of the data pairs across the two spaces is expressed as:

\Phi = \frac{1}{2 N_p N} \left\| H - aE - b \right\|_F^2

where N is the number of training data points in the training set and N_p is the number of data pairs; a and b are the parameters of the linear distance transformation; E is the matrix of Euclidean distances between data pairs and H is the matrix of Hamming distances between their binary codes, with E, H ∈ R^{N×N}. Element E(i, j) of matrix E is the Euclidean distance between training points x_i and x_j; element H(i, j) of matrix H is the Hamming distance between the corresponding binary codes b_i and b_j. Assuming each binary code has L bits, the Hamming distance between codes b_i and b_j is expressed as:

H(i, j) = L - b_i^T b_j - (1_{L \times 1} - b_i)^T (1_{L \times 1} - b_j).
Further, the objective function that keeps the fidelity of the original unsupervised hashing method is expressed as:

\Psi = \frac{1}{2N} \left\| B - U \right\|_2^2

where B, U ∈ R^{L×N}; each column of B and of U is the binary code of one training data point. Each column of U is generated by the unsupervised hashing method, while each column of B is the result of binarizing the output of one path of the neural network.
Further, alternately updating the neural network parameters and the linear distance transformation parameters until convergence or until the set number of iterations is reached comprises:

fusing the two objective functions, which gives:

\Phi + \lambda\Psi + \beta\|W\|^2 = \frac{1}{2 N_p N} \|H - aE - b\|_F^2 + \frac{\lambda}{2N} \|B - U\|_2^2 + \beta\|W\|^2

s.t. B ∈ {0, 1}^{N×L}

where λ is the parameter controlling the relative importance of the first two terms of the objective, the third term is a regularization constraint used to limit the magnitude of the neural network parameters, β is the regularization coefficient, and W denotes the neural network parameters;

obtaining the optimal parameters W, a and b by minimizing the above objective; the binary constraint is removed, and the output \tilde{B} of the neural network replaces B in the objective, so the final objective becomes:

\min_{W, a, b} \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2 + \frac{\lambda}{2N} \|\tilde{B} - U\|_2^2 + \beta\|W\|^2

s.t. \tilde{B} ∈ (0, 1)^{N×L}

where element \tilde{H}(i, j) of matrix \tilde{H} is computed from \tilde{b}_i and \tilde{b}_j, the results of removing the binary constraint from b_i and b_j;

solving the final objective by alternating optimization:

fixing parameter W and updating a and b, so that the optimization of the objective becomes:

\min_{a, b} \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2

which can be solved by maximum likelihood;

fixing parameters a and b and updating W, so that the optimization of the objective becomes:

\min_W L(W) = \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2 + \frac{\lambda}{2N} \|\tilde{B} - U\|_2^2 + \beta\|W\|^2

rewriting the above objective in vector form:

L(W) = \frac{1}{2 N_p N} \sum_i \sum_j (\tilde{h}_{i,j} - a e_{i,j} - b)^2 + \frac{\lambda}{2N} \sum_k \|\tilde{b}_k - u_k\|^2 + \beta\|W\|^2

where \tilde{h}_{i,j}, e_{i,j}, \tilde{b}_k and u_k denote the entries of \tilde{H} and E and the columns of \tilde{B} and U; by the chain rule of differentiation, the gradient of the objective is expressed as:

\frac{\partial L(W)}{\partial W} = \sum_{i=1}^{N} \frac{\partial L(W)}{\partial \tilde{b}_i} \cdot \frac{\partial \tilde{b}_i}{\partial W} = \frac{1}{N_p N} \sum_i \sum_j (\tilde{h}_{i,j} - a e_{i,j} - b) \left( \frac{\partial \tilde{h}_{i,j}}{\partial \tilde{b}_i} \cdot \frac{\partial \tilde{b}_i}{\partial W} + \frac{\partial \tilde{h}_{i,j}}{\partial \tilde{b}_j} \cdot \frac{\partial \tilde{b}_j}{\partial W} \right) + \frac{\lambda}{N} \sum_k (\tilde{b}_k - u_k) \frac{\partial \tilde{b}_k}{\partial W} + 2\beta W

where \partial L(W)/\partial \tilde{b}_i is the gradient of the objective with respect to the network output, and the neural network parameter matrix W is updated by the backpropagation algorithm;

repeating the above alternating optimization until convergence or until the set number of iterations is reached.
As can be seen from the technical solution provided above, a two-path neural network is used to preserve the linear distance transformation relation while simultaneously retaining the original hash mapping of the unsupervised hashing method, and the hash function is learned for the two-path network by alternating optimization. The scheme is applicable to most constraint-based unsupervised hashing methods and can significantly improve their retrieval performance.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a distance-preserving hashing method based on a two-path neural network according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a distance-preserving hashing method based on a two-path neural network according to an embodiment of the present invention. As shown in Fig. 1, the method mainly comprises the following steps:
Step 11: use an unsupervised hashing method to generate a binary code for each training data point, and form data pairs from pairs of training points together with their corresponding binary codes.
Those skilled in the art will understand that generating a binary code for each training data point with an unsupervised hashing method can be achieved with the prior art, as sketched below.
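For illustration only, the following Python sketch uses a random-projection (LSH-style) hash as a stand-in for the unsupervised hashing method; any existing unsupervised method (ITQ, spectral hashing, and so on) could supply the codes U instead. All names and sizes are assumptions, and codes are stored as rows here rather than the columns used in the formulas below.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d, L = 1000, 64, 32            # training points, feature dim, code length
X = rng.standard_normal((N, d))   # training data, one point per row

# Stand-in unsupervised hashing method: random-projection LSH.
W_lsh = rng.standard_normal((d, L))
U = (X @ W_lsh > 0).astype(np.float64)    # N x L binary codes in {0, 1}

# Pairwise Euclidean distance matrix E(i, j) = ||x_i - x_j||,
# so every ordered pair of training points forms a data pair.
sq = (X ** 2).sum(axis=1)
E = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0))
```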
Step 12: feed the data pairs into the two-path neural network, whose training objective is to preserve the linear distance transformation relation of the data pairs across the two spaces while keeping the fidelity of the original unsupervised hashing method.
In the embodiment of the present invention, the two paths of the neural network have identical structure, parameters and weight matrices; the only difference between them is that they receive different training data points as input.
In the embodiment of the present invention, a linear distance transformation between the original Euclidean space and the Hamming space is maintained (that is, the Euclidean distance of a data pair in the original Euclidean space and the Hamming distance between the corresponding binary codes in Hamming space should be related by a linear transformation), while the fidelity of the original unsupervised hashing method is kept at the same time. These two targets are described in detail below.
1. The linear distance transformation relation.
Preserving the metric relation between different feature spaces is an intrinsic characteristic of binary hashing methods. Some existing hashing methods map the Hamming distance and the Euclidean distance of each data pair onto the same scale, for example [0, 1], and then minimize the deviation between the two mapped distances to achieve distance preservation. However, we find that this constraint is usually too strict to obtain good performance; in fact, it suffices to keep a linear mapping between the original Euclidean distances and the Hamming distances, and experiments confirm that this constraint can significantly increase the performance of the original unsupervised hashing method.
The objective function that keeps the linear distance transformation relation of the data pairs across the two spaces is expressed as:

\Phi = \frac{1}{2 N_p N} \left\| H - aE - b \right\|_F^2

where N is the number of training data points in the training set and N_p is the number of data pairs; a and b are the parameters of the linear distance transformation; ‖·‖_F denotes the Frobenius norm, a matrix norm whose square equals the sum of the squares of all matrix entries; E is the matrix of Euclidean distances between data pairs and H is the matrix of Hamming distances between their binary codes, with E, H ∈ R^{N×N}. Element E(i, j) of matrix E is the Euclidean distance between training points x_i and x_j; element H(i, j) of matrix H is the Hamming distance between the corresponding binary codes b_i and b_j.
This objective is similar to that of least squares; the difference is that in our objective the matrix H takes discrete values, because the Hamming distance between two L-bit binary codes can take only L + 1 values, namely 0, 1, 2, ..., L. For convenience of solution, the Hamming distance in the embodiment of the present invention is computed differently from the usual method (counting the '1's in the XOR of two binary codes); here H(i, j) is expressed as:

H(i, j) = L - b_i^T b_j - (1_{L \times 1} - b_i)^T (1_{L \times 1} - b_j).
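As a quick check, the inner-product form above can be computed for the whole code matrix at once and compared against the usual XOR/popcount definition. This sketch continues the hypothetical setup from the previous block.

```python
def hamming_inner_product(B):
    """H(i, j) = L - b_i^T b_j - (1 - b_i)^T (1 - b_j) for codes in the
    rows of B: matching 1s and matching 0s are subtracted from L, so only
    the differing bits remain.  Also usable for relaxed codes in (0, 1)."""
    L_bits = B.shape[1]
    ones = np.ones_like(B)
    return L_bits - B @ B.T - (ones - B) @ (ones - B).T

H = hamming_inner_product(U)

# Sanity check against the usual definition (count of differing bits).
i, j = 3, 7
assert H[i, j] == np.count_nonzero(U[i] != U[j])
```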
2. Keeping the fidelity of the original unsupervised method.
Many traditional unsupervised hashing methods already exist that take full account of the distribution characteristics of the data and the intrinsic attributes of good binary codes. The scheme of the embodiment of the present invention aims precisely to boost the performance of these existing unsupervised methods.
The objective function that keeps the fidelity of the original unsupervised hashing method is expressed as:

\Psi = \frac{1}{2N} \left\| B - U \right\|_2^2

where B, U ∈ R^{L×N}; each column of B and of U is the binary code of one training data point. Each column of U is generated by the unsupervised hashing method, while each column of B is the binary code obtained by binarizing the output of one path of the neural network.
Step 13: alternately update the neural network parameters and the linear distance transformation parameters; after convergence or after the set number of iterations is reached, take either path of the two-path neural network as the newly learned hash function.
After the two objective functions are obtained in step 12, they can be fused, which gives:

\Phi + \lambda\Psi + \beta\|W\|^2 = \frac{1}{2 N_p N} \|H - aE - b\|_F^2 + \frac{\lambda}{2N} \|B - U\|_2^2 + \beta\|W\|^2

s.t. B ∈ {0, 1}^{N×L}

where λ is the parameter controlling the relative importance of the first two terms of the objective, the third term is a regularization constraint used to limit the amplitude of variation of the neural network parameters, β is the regularization coefficient, and W denotes the neural network parameters.
The optimal parameters W, a and b are obtained by minimizing this objective. Because of the binary constraint, the fused objective cannot be solved directly. To handle this, the binary constraint is removed directly, and the output \tilde{B} of the neural network replaces B in the objective, so the final objective becomes:

\min_{W, a, b} \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2 + \frac{\lambda}{2N} \|\tilde{B} - U\|_2^2 + \beta\|W\|^2

s.t. \tilde{B} ∈ (0, 1)^{N×L}

where element \tilde{H}(i, j) of matrix \tilde{H} is computed from \tilde{b}_i and \tilde{b}_j, the outputs of the neural network; the desired binary codes b_i and b_j are the results of binarizing \tilde{b}_i and \tilde{b}_j. A numerical sketch of this relaxed objective follows.
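A minimal sketch of the relaxed objective, under the assumption (not fixed by the invention) that each path is a single sigmoid layer b̃ = σ(XW) and that all N² ordered pairs are used, so that N_p = N²:

```python
def relaxed_codes(X, W):
    """One path of the network: b~ = sigmoid(X W), with entries in (0, 1).
    The invention leaves the architecture open; one layer is an assumption."""
    return 1.0 / (1.0 + np.exp(-(X @ W)))

def relaxed_loss(X, W, U, E, a, b, lam, beta):
    """Final objective: distance term + lambda * fidelity term + beta * ||W||^2."""
    B_t = relaxed_codes(X, W)                    # N x L relaxed codes
    H_t = hamming_inner_product(B_t)             # relaxed Hamming distances
    N = B_t.shape[0]
    Np = N * N                                   # all ordered pairs (assumed)
    dist = ((H_t - a * E - b) ** 2).sum() / (2.0 * Np * N)
    fid = lam * ((B_t - U) ** 2).sum() / (2.0 * N)
    return dist + fid + beta * (W ** 2).sum()
```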
However, the above objective is still difficult to solve directly; in the embodiment of the present invention, the final objective is solved by alternating optimization:
1) Fix parameter W and update a and b; the optimization of the objective becomes:

\min_{a, b} \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2

When the matrices \tilde{H} and E are flattened into two column vectors, this objective degenerates into a linear regression problem, which can be solved directly by maximum likelihood; a closed-form sketch follows.
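Concretely, flattening H̃ and E gives a two-parameter least-squares problem (the maximum-likelihood solution under Gaussian noise) with a closed-form answer:

```python
def update_ab(H_t, E):
    """Fit H~ ≈ a E + b by flattening both matrices and solving the
    two-parameter linear regression in closed form."""
    h = H_t.ravel()
    e = E.ravel()
    A = np.stack([e, np.ones_like(e)], axis=1)   # design matrix [e, 1]
    (a, b), *_ = np.linalg.lstsq(A, h, rcond=None)
    return float(a), float(b)
```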
2) Fix parameters a and b and update W; the optimization of the objective becomes:

\min_W L(W) = \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2 + \frac{\lambda}{2N} \|\tilde{B} - U\|_2^2 + \beta\|W\|^2

Since the objective L(W) is continuous, the parameter matrix W of the neural network can be learned by backpropagation. Compared with the backpropagation of a traditional neural network, the backpropagation of the two-path network in the embodiment of the present invention is only partially modified. Specifically, the objective above can be rewritten in vector form:

L(W) = \frac{1}{2 N_p N} \sum_i \sum_j (\tilde{h}_{i,j} - a e_{i,j} - b)^2 + \frac{\lambda}{2N} \sum_k \|\tilde{b}_k - u_k\|^2 + \beta\|W\|^2

where \tilde{h}_{i,j}, e_{i,j}, \tilde{b}_k and u_k denote the results of putting the matrices \tilde{H} and E and the codes \tilde{B} and U into vector form. By the chain rule of differentiation, the gradient of the objective is expressed as:

\frac{\partial L(W)}{\partial W} = \sum_{i=1}^{N} \frac{\partial L(W)}{\partial \tilde{b}_i} \cdot \frac{\partial \tilde{b}_i}{\partial W} = \frac{1}{N_p N} \sum_i \sum_j (\tilde{h}_{i,j} - a e_{i,j} - b) \left( \frac{\partial \tilde{h}_{i,j}}{\partial \tilde{b}_i} \cdot \frac{\partial \tilde{b}_i}{\partial W} + \frac{\partial \tilde{h}_{i,j}}{\partial \tilde{b}_j} \cdot \frac{\partial \tilde{b}_j}{\partial W} \right) + \frac{\lambda}{N} \sum_k (\tilde{b}_k - u_k) \frac{\partial \tilde{b}_k}{\partial W} + 2\beta W

where \partial L(W)/\partial \tilde{b}_i is the gradient of the objective with respect to the network output, and the parameter matrix W is updated by the backpropagation algorithm. By comparison, the only difference between the backpropagation of the two-path network in the embodiment of the present invention and the general backpropagation algorithm is that the coefficients in front of each \partial \tilde{b}_k / \partial W are not all identical. Therefore, the backpropagation algorithm in the embodiment of the present invention is used to update the two-path network, as illustrated below.
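For the single-sigmoid-layer sketch above, the coefficients can be written out explicitly: differentiating H̃(i, j) with respect to b̃_i gives (1 − 2b̃_j), and the symmetry of the residual matrix merges the (i, j) and (j, i) contributions into a factor of 2. This illustrates the modified backpropagation under the stated assumptions, not the invention's prescribed architecture:

```python
def loss_grad_W(X, W, U, E, a, b, lam, beta):
    """Gradient of L(W) for the one-layer sketch, chained through
    b~ = sigmoid(X W).  dH~(i,j)/db~_i = 1 - 2 b~_j follows from the
    relaxed Hamming formula; the residual matrix R is symmetric, so the
    (i,j) and (j,i) terms combine into a factor of 2."""
    B_t = relaxed_codes(X, W)
    H_t = hamming_inner_product(B_t)
    N = B_t.shape[0]
    Np = N * N
    R = H_t - a * E - b                                  # pairwise residuals
    G_dist = 2.0 * (R @ (1.0 - 2.0 * B_t)) / (Np * N)    # dL_dist / dB~
    G_fid = lam * (B_t - U) / N                          # dL_fid / dB~
    G_B = G_dist + G_fid
    # Chain through the sigmoid: db~/dz = b~ (1 - b~) with z = X W.
    return X.T @ (G_B * B_t * (1.0 - B_t)) + 2.0 * beta * W
```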
The above alternating optimization steps 1) and 2) are repeated until convergence or until the set number of iterations is reached. Since the two paths of the network are identical, once the parameter matrix W has been obtained, either path can be taken directly as the hash mapping function; binarizing the output of the network then yields the final binary codes. A compact sketch of the whole loop follows.
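Putting the pieces together, a sketch of the alternating loop and of the resulting hash function; plain gradient descent stands in for full backpropagation, and all step sizes and iteration counts are illustrative:

```python
def train(X, U, E, L_bits, lam=1.0, beta=1e-4, lr=0.1, outer=50, inner=10):
    """Alternate between the closed-form (a, b) update and gradient steps
    on W until the set number of iterations is reached."""
    rng = np.random.default_rng(1)
    W = 0.01 * rng.standard_normal((X.shape[1], L_bits))
    a, b = 1.0, 0.0
    for _ in range(outer):
        H_t = hamming_inner_product(relaxed_codes(X, W))
        a, b = update_ab(H_t, E)                 # step 1: fix W, fit a and b
        for _ in range(inner):                   # step 2: fix a and b, update W
            W -= lr * loss_grad_W(X, W, U, E, a, b, lam, beta)
    return W, a, b

W, a, b = train(X, U, E, L_bits=L)

def hash_codes(X_new, W):
    """Either path of the trained network, binarized, is the learned hash."""
    return (relaxed_codes(X_new, W) > 0.5).astype(np.uint8)
```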
In the above scheme of the embodiment of the present invention, a two-path neural network is used to preserve the linear distance transformation relation while simultaneously retaining the original hash mapping of the unsupervised hashing method, and the hash function is learned for the two-path network by alternating optimization. The scheme is applicable to most constraint-based unsupervised hashing methods and can significantly improve their retrieval performance.
Through the description of the above embodiments, those skilled in the art can clearly understand that the above embodiments can be implemented in software, or in software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a portable hard disk) and includes instructions that cause a computer device (a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (4)

1. A distance-preserving hashing method based on a two-path neural network, characterized by comprising:
using an unsupervised hashing method to generate a binary code for each training data point, and forming data pairs from pairs of training points together with their corresponding binary codes;
feeding the data pairs into the two-path neural network, whose training objective is to preserve the linear distance transformation relation of the data pairs across the two spaces while keeping the fidelity of the original unsupervised hashing method;
alternately updating the neural network parameters and the linear distance transformation parameters until convergence or until a set number of iterations is reached, and then taking either path of the two-path neural network as the newly learned hash function.
2. The method according to claim 1, characterized in that the objective function that keeps the linear distance transformation relation of the data pairs across the two spaces is expressed as:

\Phi = \frac{1}{2 N_p N} \left\| H - aE - b \right\|_F^2

where N is the number of training data points in the training set and N_p is the number of data pairs; a and b are the parameters of the linear distance transformation; E is the matrix of Euclidean distances between data pairs and H is the matrix of Hamming distances between their binary codes, with E, H ∈ R^{N×N}; element E(i, j) of matrix E is the Euclidean distance between training points x_i and x_j; element H(i, j) of matrix H is the Hamming distance between the corresponding binary codes b_i and b_j; assuming each binary code has L bits, the Hamming distance between codes b_i and b_j is expressed as:

H(i, j) = L - b_i^T b_j - (1_{L \times 1} - b_i)^T (1_{L \times 1} - b_j).
3. The method according to claim 2, characterized in that the objective function that keeps the fidelity of the original unsupervised hashing method is expressed as:

\Psi = \frac{1}{2N} \left\| B - U \right\|_2^2

where B, U ∈ R^{L×N}; each column of B and of U is the binary code of one training data point; each column of U is generated by the unsupervised hashing method, and each column of B is the result of binarizing the output of one path of the neural network.
4. The method according to claim 2 or 3, characterized in that alternately updating the neural network parameters and the linear distance transformation parameters until convergence or until the set number of iterations is reached comprises:
fusing the two objective functions, which gives:

\Phi + \lambda\Psi + \beta\|W\|^2 = \frac{1}{2 N_p N} \|H - aE - b\|_F^2 + \frac{\lambda}{2N} \|B - U\|_2^2 + \beta\|W\|^2, s.t. B ∈ {0, 1}^{N×L}

where λ is the parameter controlling the relative importance of the first two terms of the objective, the third term is a regularization constraint used to limit the magnitude of the neural network parameters, β is the regularization coefficient, and W denotes the neural network parameters;
obtaining the optimal parameters W, a and b by minimizing the above objective, removing the binary constraint, and using the output \tilde{B} of the neural network to replace B in the objective, so that the final objective becomes:

\min_{W, a, b} \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2 + \frac{\lambda}{2N} \|\tilde{B} - U\|_2^2 + \beta\|W\|^2, s.t. \tilde{B} ∈ (0, 1)^{N×L}

where element \tilde{H}(i, j) of matrix \tilde{H} is computed from \tilde{b}_i and \tilde{b}_j, the results of removing the binary constraint from b_i and b_j;
solving the final objective by alternating optimization:
fixing parameter W and updating a and b, so that the optimization of the objective becomes:

\min_{a, b} \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2

which can be solved by maximum likelihood;
fixing parameters a and b and updating W, so that the optimization of the objective becomes:

\min_W L(W) = \frac{1}{2 N_p N} \|\tilde{H} - aE - b\|_F^2 + \frac{\lambda}{2N} \|\tilde{B} - U\|_2^2 + \beta\|W\|^2

rewriting the above objective in vector form:

L(W) = \frac{1}{2 N_p N} \sum_i \sum_j (\tilde{h}_{i,j} - a e_{i,j} - b)^2 + \frac{\lambda}{2N} \sum_k \|\tilde{b}_k - u_k\|^2 + \beta\|W\|^2

where \tilde{h}_{i,j}, e_{i,j}, \tilde{b}_k and u_k denote the entries of \tilde{H} and E and the columns of \tilde{B} and U; by the chain rule of differentiation, the gradient of the objective is expressed as:

\frac{\partial L(W)}{\partial W} = \sum_{i=1}^{N} \frac{\partial L(W)}{\partial \tilde{b}_i} \cdot \frac{\partial \tilde{b}_i}{\partial W} = \frac{1}{N_p N} \sum_i \sum_j (\tilde{h}_{i,j} - a e_{i,j} - b) \left( \frac{\partial \tilde{h}_{i,j}}{\partial \tilde{b}_i} \cdot \frac{\partial \tilde{b}_i}{\partial W} + \frac{\partial \tilde{h}_{i,j}}{\partial \tilde{b}_j} \cdot \frac{\partial \tilde{b}_j}{\partial W} \right) + \frac{\lambda}{N} \sum_k (\tilde{b}_k - u_k) \frac{\partial \tilde{b}_k}{\partial W} + 2\beta W

where \partial L(W)/\partial \tilde{b}_i is the gradient of the objective with respect to the network output, and the neural network parameter matrix W is updated by the backpropagation algorithm;
repeating the above alternating optimization until convergence or until the set number of iterations is reached.
CN201610186444.8A 2016-03-25 2016-03-25 Distance preserving Hash method based on double-circuit neural network Pending CN105893477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610186444.8A CN105893477A (en) 2016-03-25 2016-03-25 Distance preserving Hash method based on double-circuit neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610186444.8A CN105893477A (en) 2016-03-25 2016-03-25 Distance preserving Hash method based on double-circuit neural network

Publications (1)

Publication Number Publication Date
CN105893477A 2016-08-24

Family

ID=57014415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610186444.8A Pending CN105893477A (en) 2016-03-25 2016-03-25 Distance preserving Hash method based on double-circuit neural network

Country Status (1)

Country Link
CN (1) CN105893477A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117054396A (en) * 2023-10-11 2023-11-14 天津大学 Raman spectrum detection method and device based on double-path multiplicative neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7574409B2 (en) * 2004-11-04 2009-08-11 Vericept Corporation Method, apparatus, and system for clustering and classification
CN104346440A (en) * 2014-10-10 2015-02-11 浙江大学 Neural-network-based cross-media Hash indexing method
CN105279554A (en) * 2015-09-29 2016-01-27 东方网力科技股份有限公司 Depth neural network training method and device based on Hash coding layer

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7574409B2 (en) * 2004-11-04 2009-08-11 Vericept Corporation Method, apparatus, and system for clustering and classification
CN104346440A (en) * 2014-10-10 2015-02-11 浙江大学 Neural-network-based cross-media Hash indexing method
CN105279554A (en) * 2015-09-29 2016-01-27 东方网力科技股份有限公司 Depth neural network training method and device based on Hash coding layer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIN, YUE: "Nearest Neighbor Retrieval of High-Dimensional Data Based on Hashing Algorithms", China Master's Theses Full-text Database *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117054396A (en) * 2023-10-11 2023-11-14 天津大学 Raman spectrum detection method and device based on double-path multiplicative neural network
CN117054396B (en) * 2023-10-11 2024-01-05 天津大学 Raman spectrum detection method and device based on double-path multiplicative neural network

Similar Documents

Publication Publication Date Title
CN110866190B (en) Method and device for training neural network model for representing knowledge graph
CN108108854B (en) Urban road network link prediction method, system and storage medium
CN107943938A (en) A kind of large-scale image similar to search method and system quantified based on depth product
US7668386B2 (en) Lossless compression algorithms for spatial data
CN107291785A (en) A kind of data search method and device
CN111737592B (en) Recommendation method based on heterogeneous propagation collaborative knowledge sensing network
US20220253722A1 (en) Recommendation system with adaptive thresholds for neighborhood selection
CN114610897A (en) Medical knowledge map relation prediction method based on graph attention machine mechanism
CN108399268B (en) Incremental heterogeneous graph clustering method based on game theory
US11899742B2 (en) Quantization method based on hardware of in-memory computing
CN107273471A (en) A kind of binary electric power time series data index structuring method based on Geohash
CN112633481A (en) Multi-hop graph convolution neural network model and training method thereof
CN106126668A (en) A kind of image characteristic point matching method rebuild based on Hash
Vithana et al. Private read update write (PRUW) with storage constrained databases
CN105893477A (en) Distance preserving Hash method based on double-circuit neural network
WO2023279685A1 (en) Method for mining core users and core items in large-scale commodity sales
CN115186108A (en) Processing method, system, device and medium for knowledge graph completion
CN111091475B (en) Social network feature extraction method based on non-negative matrix factorization
CN104867164A (en) Vector quantization codebook designing method based on genetic algorithm
CN110674335B (en) Hash code and image bidirectional conversion method based on multiple generation and multiple countermeasures
CN110909027B (en) Hash retrieval method
CN114936296B (en) Indexing method, system and computer equipment for super-large-scale knowledge map storage
Sciriha et al. Fast algorithms for indices of nested split graphs approximating real complex networks
CN116738009B (en) Method for archiving and backtracking data
CN105022836A (en) Compact depth CNN characteristic indexing method based on SIFT insertion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160824