CN108399211B - Large-scale image retrieval algorithm based on binary characteristics - Google Patents
Large-scale image retrieval algorithm based on binary characteristics
- Publication number
- CN108399211B CN108399211B CN201810106624.XA CN201810106624A CN108399211B CN 108399211 B CN108399211 B CN 108399211B CN 201810106624 A CN201810106624 A CN 201810106624A CN 108399211 B CN108399211 B CN 108399211B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
The invention discloses a large-scale image retrieval algorithm based on binary features, which comprises the following steps. Step S1: initialize the neural network parameters, and initialize the real-valued output features from the training picture set. Step S2: construct a picture similarity matrix from the training picture set, and construct its Laplace matrix. Step S3: construct a loss function through the weighted similarity measurement. Step S4: differentiate the loss function with respect to the real-valued output features and, with the difference quantity fixed, update the real-valued output features while also updating the network parameters. Step S5: differentiate the loss function with respect to the difference quantity and, with the real-valued output features fixed, update the difference quantity. Step S6: increase the higher-order expansion weights and continue updating the real-valued output features and the network parameters with the loss function according to steps S4 and S5 until training is finished. The method effectively compensates for the imbalance between positive and negative training samples in the input data pairs and effectively improves retrieval precision.
Description
Technical Field
The invention relates to the technical field of computer vision and multimedia, and in particular to a large-scale image retrieval algorithm based on binary features.
Background
Image retrieval is a data search task for finding pictures: given retrieval information supplied by the user, such as keywords or a query picture, the system searches the database for pictures similar to the input and returns them to the user. The similarity measure may be based on auxiliary picture information (e.g. keywords) or on picture content features such as texture, color, and shape.
Content-based image retrieval is an application of computer vision to image retrieval. Such algorithms aim to avoid retrieving images by textual information and instead rely on features of the picture itself, such as its texture, color, and shape. They require computing Euclidean distances in feature space between the query image and the database images. On large-scale datasets, both the storage overhead of real-valued features and the time overhead of computing Euclidean distances at retrieval time are unacceptable.
Hash-based image retrieval addresses this excessive time and storage overhead. Hash-based retrieval algorithms store and retrieve images with binary features rather than real-valued features. The distance between binary features can be computed quickly with an exclusive-or (XOR) operation followed by a bit count, and since each bit of a binary feature needs only 1 bit of storage, the storage cost of the database picture features drops markedly. Such a binary feature is called a hash feature, the function mapping the original space to Hamming space is called a hash function, and the process of learning the hash feature is called hash learning.
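The XOR-plus-popcount distance computation described above can be sketched in a few lines; `pack_bits` and `hamming_distance` are illustrative helper names, not terms from the patent:

```python
import numpy as np

def pack_bits(bits):
    # Pack a {0,1} bit vector into bytes, so each hash bit costs 1 bit of storage.
    return np.packbits(np.asarray(bits, dtype=np.uint8))

def hamming_distance(a_packed, b_packed):
    # XOR the packed codes, then count the set bits (popcount) of the result.
    xor = np.bitwise_xor(a_packed, b_packed)
    return int(np.unpackbits(xor).sum())

a = pack_bits([1, 0, 1, 1, 0, 0, 1, 0])
b = pack_bits([1, 1, 1, 0, 0, 0, 0, 0])
print(hamming_distance(a, b))  # codes differ at positions 1, 3, 6 -> 3
```

On real hardware the per-byte popcount would typically use a native `popcnt` instruction; the NumPy version above keeps the sketch self-contained.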
The biggest difficulty in hash learning is that finding the optimal hash features is an NP-hard optimization problem, a consequence of each hash bit being restricted to {0, 1} or {−1, +1}. Such an integer optimization problem cannot be solved optimally by traditional numerical methods, so the constraints must be relaxed. There are three main relaxation strategies: discarding the binary constraint outright, introducing a quantization-error penalty term, and relaxing the step function into a sigmoid function. The first strategy ignores the constraint entirely, so the learned hash function suffers a huge quantization error. The second must introduce auxiliary variables such as real-valued hidden-layer features, decompose the original integer problem into several solvable subproblems, and seek a local optimum through stepwise alternating optimization; sometimes the subproblem for the hash features is itself NP-hard with no closed-form solution, and a coordinate descent method is needed to converge to a local optimum of the subproblem. The third markedly slows the convergence of the training model because of the nonlinear function it introduces. In all three methods, there is always a gap between the trained hash function Φ and the hash function Ψ = sgn(Φ) actually used, which degrades retrieval effectiveness on data outside the sample set.
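The gap between a trained relaxed function Φ and the deployed Ψ = sgn(Φ) can be seen numerically. This is a toy illustration using a tanh relaxation on random pre-activations, not the patent's network:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=1000)   # stand-in pre-activation outputs of a trained model
phi = np.tanh(z)            # relaxed, real-valued hash function Phi
psi = np.sign(phi)          # binarized hash function Psi = sgn(Phi) used at test time

# Quantization error: mean squared gap between trained and deployed features.
gap = np.mean((phi - psi) ** 2)
print(gap > 0)  # strictly positive unless |phi| saturates at exactly 1
```

The gap shrinks only as the relaxed outputs saturate toward ±1, which is exactly the regime where sigmoid-style relaxations slow training down.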
Furthermore, when the training data are given as pairwise data, previous approaches define the supervision label of a similar picture pair as 1 and that of a dissimilar pair as 0 or −1. Because the number of similar pairs constructed from most training sets is always far smaller than the number of dissimilar pairs, the positive and negative training samples are imbalanced.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the invention aims to provide a large-scale image retrieval algorithm based on binary characteristics, and the method can effectively improve the retrieval precision.
In order to achieve the above object, an embodiment of the present invention provides a large-scale image retrieval algorithm based on binary features, including the following steps. Step S1: initialize the neural network parameters, and initialize the real-valued output features from the training picture set. Step S2: construct a picture similarity matrix from the training picture set, and construct its Laplace matrix. Step S3: construct a loss function through the weighted similarity measurement. Step S4: differentiate the loss function with respect to the real-valued output features and, with the difference quantity fixed, update the real-valued output features while also updating the network parameters. Step S5: differentiate the loss function with respect to the difference quantity and, with the real-valued output features fixed, update the difference quantity. Step S6: increase the higher-order expansion weights and continue updating the real-valued output features and the network parameters with the loss function according to steps S4 and S5 until training is finished.
According to the binary-feature-based large-scale image retrieval algorithm of the embodiments of the present invention, the original binary optimization problem in hash learning is converted into a differentiable optimization problem over the hash function, and the binary constraint is decoupled from the similarity-preserving objective of hash learning, so that the converted problem can be solved through a simple alternating iteration framework. The weighted similarity measurement effectively compensates for the imbalance between positive and negative training samples in the input data pairs, the mismatch between the hash function obtained by training and the hash function actually used is effectively resolved, and retrieval precision is improved.
Further, in an embodiment of the present invention, the step S1 further includes: step S101: initializing the neural network parameters; step S102: obtaining the real-valued output features of the pictures in the training picture set with the initialized neural network, taking these as the initial real-valued output features H, setting the discrete output features B = sgn(H), and using the discrete output features B as the picture hash codes.
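A minimal sketch of steps S101 and S102 follows, with a random stand-in for the freshly initialized network; the helper name `extract_fn` and the code length are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_features(extract_fn, pictures):
    # Run the initialized network over the training set to obtain the
    # real-valued outputs H, then binarize to get the hash codes B = sgn(H).
    H = np.stack([extract_fn(p) for p in pictures])  # n x l real-valued features
    B = np.where(H >= 0, 1, -1)                      # discrete features in {-1, 1}
    return H, B

# Stand-in for an untrained neural network producing l = 8 outputs per picture.
extract_fn = lambda p: rng.normal(size=8)
pictures = [None] * 5                                # placeholder "pictures"
H, B = init_features(extract_fn, pictures)
print(H.shape, B.shape)  # (5, 8) (5, 8)
```

`np.where(H >= 0, 1, -1)` is used instead of `np.sign` so that a zero output still maps to a valid code bit.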
Further, in an embodiment of the present invention, the step S2 includes the following steps: step S201: calculating the similarity of any two pictures in the training picture set through a binary similarity function, recording the similarity of the ith and jth pictures as s_ij, and recording the picture similarity matrix so formed as S; step S202: obtaining the Laplace matrix of the picture similarity matrix S, recorded as L_sym.
Further, in an embodiment of the present invention, the step S202 specifically includes:
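The formula for L_sym in step S202 appears only as an image in the original document. Assuming the standard symmetric normalized Laplacian L_sym = I − D^(−1/2) S D^(−1/2) that the subscript "sym" suggests, steps S201 and S202 can be sketched as follows (the label-based similarity function is also an assumption):

```python
import numpy as np

def build_similarity(labels):
    # Binary similarity function: s_ij = 1 if pictures i and j share a label,
    # otherwise 0. This yields the picture similarity matrix S.
    y = np.asarray(labels)
    return (y[:, None] == y[None, :]).astype(float)

def normalized_laplacian(S):
    # Assumed form: L_sym = I - D^{-1/2} S D^{-1/2}, D the degree matrix of S.
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return np.eye(len(S)) - d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]

S = build_similarity([0, 0, 1, 1])
L = normalized_laplacian(S)
print(np.allclose(L, L.T))  # True: L_sym is symmetric
```

With the toy labels above, every picture is similar to itself and one other picture, so every degree is 2 and the diagonal of L is 1 − 1/2 = 0.5.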
Further, in an embodiment of the present invention, the step S3 includes the following steps:
step S301: compensating imbalance of positive and negative training samples by using a weighted similarity measurement method, calculating weighted similarity of any two pictures in the training picture set through the similarity, and recording the weighted similarity of the ith picture and the jth picture asThe weighted similarity matrix of the two sets is recorded as
Step S302: for any pictures i and j in the training picture set, according to the discrete output characteristic biAnd bjAnd constructing a loss function:
step S303: summing all sample pairs in the training picture set to construct a loss function:
the loss function matrix form is:
Step S304: defining a difference Δ ═ B-H, according to a taylor series, the loss function develops at the real-valued output characteristic H as follows:
wherein, if the real-valued output features H and the difference quantity Δ are written column by column, the expansion involves the ith column vector of H and the ith column vector of Δ;
step S305: according to the expanded form of step S304, the loss function in step S303 is:
step S306: combining the step S303 and the step S305, constructing the loss function with respect to the real-valued output feature H and the difference Δ:
wherein (H + Δ) ∈ {−1, 1}^(n×l), and λ1 and λ2 are the higher-order expansion weights.
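The expansion formulas of steps S304 to S306 are reproduced only as images in the original. The generic shape of a second-order Taylor expansion of the loss at H with Δ = B − H, consistent with the surrounding description (column vectors h_i of H and δ_i of Δ), is:

```latex
% Generic second-order expansion at H (the exact terms in the patent are images):
\mathcal{L}(B) = \mathcal{L}(H + \Delta)
  \approx \mathcal{L}(H)
  + \sum_{i} \delta_i^{\top} \nabla_{h_i} \mathcal{L}(H)
  + \frac{1}{2} \sum_{i} \delta_i^{\top} \nabla_{h_i}^{2} \mathcal{L}(H)\, \delta_i,
\qquad \text{subject to } (H + \Delta) \in \{-1, 1\}^{n \times l}.
```

In step S306 the higher-order terms are weighted by λ1 and λ2, which are then gradually increased in step S6 so that the binary constraint is enforced more and more strictly.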
Further, in an embodiment of the present invention, the step S301 specifically includes:
for any pictures i and j in the training picture set, calculating the weighted similarity according to the similarity s_ij:
in order to ensure that, under this similarity measurement, similar pictures have a positive similarity value and dissimilar pictures a negative one, let 0 < β < 1; when β = 0.5, the result is simply the original similarity scaled by 0.5, which is equivalent to not using the weighted similarity measurement method.
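The exact weighting formula is an image in the original. One mapping consistent with the remark about β = 0.5 (similar pairs positive, dissimilar pairs negative, and β = 0.5 reducing to a uniform 0.5 scaling of a ±1 similarity) assigns weight β to similar pairs and β − 1 to dissimilar pairs; this is an assumption, not the patent's formula:

```python
def weighted_similarity(s_ij, beta=0.7):
    # Hypothetical weighting: similar pairs (s_ij = 1) get the positive weight
    # beta, dissimilar pairs (s_ij = 0) the negative weight beta - 1.
    # Requires 0 < beta < 1; beta = 0.5 yields +/-0.5, i.e. a plain 0.5-scaled
    # +/-1 similarity with no reweighting effect.
    assert 0.0 < beta < 1.0
    return beta if s_ij == 1 else beta - 1.0

print(weighted_similarity(1, beta=0.5), weighted_similarity(0, beta=0.5))  # 0.5 -0.5
```

Choosing β > 0.5 would then give the scarce similar pairs more weight than the abundant dissimilar pairs, matching the stated goal of compensating sample imbalance.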
Further, in an embodiment of the present invention, the step S4 includes the following steps:
step S401: fixing a difference quantity, constructing the loss function with respect to the real-valued output features:
step S402: fixing the difference quantity, calculating the derivative of the loss function with respect to the real-valued output features:
step S403: updating the real-valued output features and the network parameters by stochastic gradient descent.
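Steps S401 to S403 can be sketched as follows. The true gradient comes from the patent's loss, which is given only as an image; here a representative similarity-preserving surrogate, tr(Hᵀ L_sym H) plus a penalty pulling H toward the binary B = H + Δ, stands in, and the joint backpropagation update of the network parameters is omitted:

```python
import numpy as np

def sgd_step_H(H, Delta, L_sym, lam1, lr=0.1):
    # Hedged sketch of S4: with Delta fixed, take one gradient step on H.
    # Surrogate loss: tr(H^T L_sym H) + lam1 * ||H - (H + Delta)||^2, whose
    # gradient is 2 L_sym H - 2 lam1 Delta.
    grad = 2.0 * L_sym @ H - 2.0 * lam1 * Delta
    return H - lr * grad

rng = np.random.default_rng(1)
n, l = 4, 3
H = rng.normal(size=(n, l))
Delta = np.where(H >= 0, 1.0, -1.0) - H   # Delta = B - H with B = sgn(H)
L_sym = np.eye(n)                         # placeholder Laplacian for the sketch
H_new = sgd_step_H(H, Delta, L_sym, lam1=0.5)
print(H_new.shape)  # (4, 3)
```

In the patent the same gradient is propagated back through the neural network, so the network parameters move together with H.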
Further, in an embodiment of the present invention, the step S5 includes the following steps:
step S501: fixing the real-valued output characteristics, constructing a loss function with respect to the difference quantity:
step S502: fixing the real-valued output features, calculating the derivative of the loss function with respect to the difference quantity:
step S503: the difference amount Δ is updated.
Further, in an embodiment of the present invention, the step S503 specifically includes:
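The concrete update of step S503 is given as an equation image in the original. One feasible-projection sketch, under the constraint (H + Δ) ∈ {−1, 1}^(n×l) stated in step S306, snaps H to the nearest binary code and takes Δ as the resulting gap; this is an illustrative assumption, not the patent's closed-form update:

```python
import numpy as np

def update_delta(H):
    # With H fixed, (H + Delta) must be binary, i.e. B = H + Delta in {-1, 1}.
    # The feasible Delta of minimal norm snaps each entry of H to the nearest
    # binary value and returns the difference.
    B = np.where(H >= 0, 1.0, -1.0)
    return B - H

H = np.array([[0.3, -1.2], [0.9, -0.1]])
Delta = update_delta(H)
print(np.unique(H + Delta))  # [-1.  1.]
```

The actual update would also account for the gradient computed in step S502, not only feasibility.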
Further, in an embodiment of the present invention, the step S6 includes the following steps: step S601: increasing the higher-order expansion weights λ1 and λ2; step S602: continuing to update the real-valued output features and the network parameters with the loss function according to steps S4 and S5 until training is finished.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a large scale image retrieval algorithm based on binary features according to an embodiment of the present invention;
FIG. 2 is a flowchart of a large-scale image retrieval algorithm based on binary features according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A proposed binary-feature-based large-scale image retrieval algorithm according to an embodiment of the present invention is described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a large-scale image retrieval algorithm based on binary features according to an embodiment of the present invention.
As shown in fig. 1 and fig. 2, the binary-feature-based large-scale image retrieval algorithm includes the following steps:
step S1: initializing neural network parameters, and initializing real-value output characteristics according to the training picture set.
Further, in an embodiment of the present invention, the step S1 further includes: step S101: initializing the neural network parameters; step S102: obtaining the real-valued output features of the pictures in the training picture set with the initialized neural network, taking these as the initial real-valued output features H, setting the discrete output features B = sgn(H), and using the discrete output features B as the picture hash codes.
Step S2: and constructing a picture similarity matrix according to the training picture set, and constructing a Laplace matrix.
Further, in an embodiment of the present invention, the step S2 includes the following steps: step S201: calculating the similarity of any two pictures in the training picture set through a binary similarity function, recording the similarity of the ith and jth pictures as s_ij, and recording the picture similarity matrix so formed as S; step S202: obtaining the Laplace matrix of the picture similarity matrix S, recorded as L_sym.
Further, in an embodiment of the present invention, step S202 specifically includes:
Step S3: the loss function is constructed by weighting the similarity measure.
Further, in an embodiment of the present invention, the step S3 includes the following steps:
step S301: compensating the imbalance of positive and negative training samples with the weighted similarity measurement method, calculating the weighted similarity of any two pictures in the training picture set from their similarity, and recording the weighted similarity of the ith picture and the jth picture, together with the weighted similarity matrix formed from these values;
step S302: for any pictures i and j in the training picture set, constructing a loss function from the discrete output features b_i and b_j:
step S303: summing all sample pairs in the training picture set to construct a loss function:
the loss function matrix form is:
step S304: defining the difference quantity Δ = B − H; according to the Taylor series, the loss function is expanded at the real-valued output features H as follows:
wherein, if the real-valued output features H and the difference quantity Δ are written column by column, the expansion involves the ith column vector of H and the ith column vector of Δ;
step S305: according to the expanded form of step S304, the loss function in step S303 is:
step S306: combining step S303 and step S305, a loss function is constructed for the real-valued output feature H and the difference Δ:
wherein (H + Δ) ∈ {−1, 1}^(n×l), and λ1 and λ2 are the higher-order expansion weights.
Further, in an embodiment of the present invention, step S301 specifically includes:
for any pictures i and j in the training picture set, calculating the weighted similarity according to the similarity s_ij:
in order to ensure that, under this similarity measurement, similar pictures have a positive similarity value and dissimilar pictures a negative one, let 0 < β < 1; when β = 0.5, the result is simply the original similarity scaled by 0.5, which is equivalent to not using the weighted similarity measurement method.
Step S4: and (4) the real-valued output characteristics are derived through a loss function, the real-valued output characteristics are updated by fixing the difference quantity, and meanwhile, the network parameters are updated.
Further, in an embodiment of the present invention, the step S4 includes the following steps:
step S401: fixing the difference quantity, constructing a loss function with respect to the real-valued output features:
step S402: fixing the difference quantity, calculating the derivative of the loss function with respect to the real-valued output features:
step S403: updating the real-valued output features and the network parameters by stochastic gradient descent.
Step S5: and (4) carrying out derivation on the difference quantity through a loss function, and fixing the real-value output characteristic to update the difference quantity.
Further, in an embodiment of the present invention, the step S5 includes the following steps:
step S501: fixing the real-valued output features, constructing a loss function with respect to the difference quantity:
step S502: fixing the real-valued output features, calculating the derivative of the loss function with respect to the difference quantity:
step S503: the difference amount Δ is updated.
Further, in an embodiment of the present invention, step S503 specifically includes:
Step S6: the high-order expansion weight is increased, and the real-valued output features and the network parameters are continuously updated according to steps S3 and S4 in combination with the loss function until the training is finished.
Further, in an embodiment of the present invention, the step S6 includes the following steps: step S601: increasing the higher-order expansion weights λ1 and λ2; step S602: continuing to update the real-valued output features and the network parameters with the loss function according to steps S4 and S5 until training is finished.
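Putting steps S1 through S6 together, the alternating scheme can be sketched end to end. This is a hedged sketch under the same assumptions as the earlier fragments (surrogate loss, standard normalized Laplacian, sign-snap Δ update), with the network omitted since in the patent its parameters are updated jointly with H by backpropagation:

```python
import numpy as np

def train_hash(H0, L_sym, epochs=3, lr=0.05, lam1=0.1, lam_growth=2.0):
    # Hedged end-to-end sketch of steps S1-S6.
    H = H0.copy()
    Delta = np.where(H >= 0, 1.0, -1.0) - H          # S1: B = sgn(H), Delta = B - H
    for _ in range(epochs):                          # S6: repeat with growing lam1
        grad_H = 2.0 * L_sym @ H - 2.0 * lam1 * Delta
        H = H - lr * grad_H                          # S4: update H, Delta fixed
        Delta = np.where(H >= 0, 1.0, -1.0) - H      # S5: update Delta, H fixed
        lam1 *= lam_growth                           # S6: increase expansion weight
    return np.where(H >= 0, 1, -1)                   # final binary hash codes

rng = np.random.default_rng(7)
B = train_hash(rng.normal(size=(6, 4)), np.eye(6))
print(B.shape)  # (6, 4)
```

Growing lam1 each round mirrors step S601: as the higher-order expansion weights rise, the real-valued features are pushed ever closer to the binary codes they must eventually equal.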
According to the binary-feature-based large-scale image retrieval algorithm of the embodiments of the present invention, the original binary optimization problem in hash learning is converted into a differentiable optimization problem over the hash function, and the binary constraint is decoupled from the similarity-preserving objective of hash learning, so that the converted problem can be solved through a simple alternating iteration framework. The weighted similarity measurement compensates for the imbalance between positive and negative training samples in the input data pairs, the mismatch between the hash function obtained by training and the hash function actually used is effectively resolved, and retrieval precision is improved.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, the first feature "on" or "under" the second feature may be directly contacting the first and second features or indirectly contacting the first and second features through an intermediate. Also, a first feature "on," "over," and "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the first feature, or may simply mean that the first feature is at a lesser elevation than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (7)
1. A large-scale image retrieval method based on binary characteristics is characterized by comprising the following steps:
step S1: initializing neural network parameters, and initializing real-value output characteristics according to a training picture set;
step S2: constructing a picture similarity matrix according to the training picture set, and constructing a Laplace matrix;
step S3: constructing a loss function through weighting similarity measurement;
step S4: the real-valued output characteristics are derived through the loss function, the fixed difference quantity updates the real-valued output characteristics, and meanwhile, network parameters are updated;
step S5: the difference is derived through the loss function, and the difference is updated by fixing the real-value output characteristic; and
step S6: increasing the high-order expansion weight, and continuously updating the real-valued output features and the network parameters according to the step S4 and the step S5 in combination with the loss function until the training is finished;
wherein the step S1 further includes:
step S101: initializing neural network parameters;
step S102: obtaining the real-valued output features of the pictures in the training picture set with the initialized neural network, taking these as the initial real-valued output features H, setting the discrete output features B = sgn(H), and using the discrete output features B as the picture hash codes;
the step S2 includes the steps of:
step S201: calculating the similarity of any two pictures in the training picture set through a binary similarity function, recording the similarity of the ith and jth pictures as s_ij, and recording the picture similarity matrix so formed as S;
step S202: obtaining the Laplace matrix of the picture similarity matrix S, recorded as L_sym;
The step S3 includes the steps of:
step S301: compensating the imbalance of positive and negative training samples with the weighted similarity measurement method, calculating the weighted similarity of any two pictures in the training picture set from their similarity, and recording the weighted similarity of the ith picture and the jth picture, together with the weighted similarity matrix formed from these values;
step S302: for any pictures i and j in the training picture set, constructing a loss function from the discrete output features b_i and b_j:
step S303: summing all sample pairs in the training picture set to construct a loss function:
the loss function matrix form is:
step S304: defining the difference quantity Δ = B − H; according to the Taylor series, the loss function is expanded at the real-valued output features H as follows:
wherein, if the real-valued output features H and the difference quantity Δ are written column by column, the expansion involves the ith column vector of H and the ith column vector of Δ;
step S305: according to the expanded form of step S304, the loss function in step S303 is:
step S306: combining the step S303 and the step S305, constructing the loss function with respect to the real-valued output feature H and the difference Δ:
wherein (H + Δ) ∈ {−1, 1}^(n×l), and λ1 and λ2 are the higher-order expansion weights.
3. The method for retrieving a large-scale image based on binary features according to claim 1, wherein the step S301 specifically comprises:
for any pictures i and j in the training picture set, calculating the weighted similarity according to the similarity s_ij:
in order to ensure that, under this similarity measurement, similar pictures have a positive similarity value and dissimilar pictures a negative one, let 0 < β < 1; when β = 0.5, the result is simply the original similarity scaled by 0.5, which is equivalent to not using the weighted similarity measurement method.
4. The binary-feature-based large-scale image retrieval method according to claim 1 or 3, wherein the step S4 includes the steps of:
step S401: fixing the difference Δ, constructing the loss function with respect to the real-valued output feature:
Step S402: fixing the difference Δ, calculating the derivative of the loss function with respect to the real-valued output feature:
step S403: updating the real-valued output features and the network parameters by stochastic gradient descent.
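The derivative claimed in step S402 is not reproduced above. Under the Frobenius-norm loss assumed earlier, with B = H + Δ and a symmetric weighted similarity matrix S̃, the gradient with respect to H has a closed form that can be checked numerically; the sketch below uses that assumed loss, and `loss`, `grad_H`, and `sgd_step` are illustrative names.

```python
import numpy as np

def loss(H, Delta, S_w, l):
    """Assumed surrogate loss ||(H + Delta)(H + Delta)^T / l - S_w||_F^2."""
    A = H + Delta
    return float(np.linalg.norm(A @ A.T / l - S_w, ord="fro") ** 2)

def grad_H(H, Delta, S_w, l):
    """Step S402 (sketch): derivative w.r.t. H with Delta fixed.
    For symmetric S_w: dL/dH = (4 / l) (A A^T / l - S_w) A, A = H + Delta."""
    A = H + Delta
    M = A @ A.T / l - S_w
    return (4.0 / l) * (M @ A)

def sgd_step(H, Delta, S_w, l, lr=0.01):
    """Step S403: one gradient-descent update of H; in the patent this
    gradient is backpropagated further into the network parameters."""
    return H - lr * grad_H(H, Delta, S_w, l)
```

A finite-difference check against `loss` confirms the closed-form gradient for the assumed loss.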
5. The binary-feature-based large-scale image retrieval method according to claim 4, wherein the step S5 comprises the steps of:
step S501: fixing the real-valued output features, constructing the loss function with respect to the difference Δ:
Step S502: fixing the real-valued output features, calculating the derivative of the loss function with respect to the difference Δ:
step S503: updating the difference Δ.
7. The binary-feature-based large-scale image retrieval method according to claim 5, wherein the step S6 comprises the steps of:
step S601: increasing the higher order expansion weight λ1And λ2;
Step S602: according to the steps S4 and S5, the real-valued output features and the network parameters are continuously updated in combination with the loss function until the training is finished.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810106624.XA CN108399211B (en) | 2018-02-02 | 2018-02-02 | Large-scale image retrieval algorithm based on binary characteristics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108399211A CN108399211A (en) | 2018-08-14 |
CN108399211B true CN108399211B (en) | 2020-11-24 |
Family
ID=63096218
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109918532B (en) * | 2019-03-08 | 2023-08-18 | 苏州大学 | Image retrieval method, device, equipment and computer readable storage medium |
CN109977194B (en) * | 2019-03-20 | 2021-08-10 | 华南理工大学 | Text similarity calculation method, system, device and medium based on unsupervised learning |
CN113157739B (en) * | 2021-04-23 | 2024-01-09 | 平安科技(深圳)有限公司 | Cross-modal retrieval method and device, electronic equipment and storage medium |
CN113705589A (en) * | 2021-10-29 | 2021-11-26 | 腾讯科技(深圳)有限公司 | Data processing method, device and equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103336801A (en) * | 2013-06-20 | 2013-10-02 | 河海大学 | Multi-feature locality sensitive hashing (LSH) indexing combination-based remote sensing image retrieval method |
CN107085585A (en) * | 2016-02-12 | 2017-08-22 | 奥多比公司 | Accurate label dependency prediction for picture search |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069173B (en) * | 2015-09-10 | 2019-04-19 | 天津中科智能识别产业技术研究院有限公司 | The fast image retrieval method of Hash is kept based on the topology for having supervision |
CN105469096B (en) * | 2015-11-18 | 2018-09-25 | 南京大学 | A kind of characteristic bag image search method based on Hash binary-coding |
CN105512289B (en) * | 2015-12-07 | 2018-08-14 | 郑州金惠计算机***工程有限公司 | Image search method based on deep learning and Hash |
CN106021364B (en) * | 2016-05-10 | 2017-12-12 | 百度在线网络技术(北京)有限公司 | Foundation, image searching method and the device of picture searching dependency prediction model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108399211B (en) | Large-scale image retrieval algorithm based on binary characteristics | |
CN110059198B (en) | Discrete hash retrieval method of cross-modal data based on similarity maintenance | |
Zhu et al. | Discrete multimodal hashing with canonical views for robust mobile landmark search | |
US20210191990A1 (en) | Efficient cross-modal retrieval via deep binary hashing and quantization | |
CN106777318B (en) | Matrix decomposition cross-modal Hash retrieval method based on collaborative training | |
CN109858015B (en) | Semantic similarity calculation method and device based on CTW (computational cost) and KM (K-value) algorithm | |
CN111160564B (en) | Chinese knowledge graph representation learning method based on feature tensor | |
CN111753190B (en) | Meta-learning-based unsupervised cross-modal hash retrieval method | |
CN109284411B (en) | Discretization image binary coding method based on supervised hypergraph | |
US20120136650A1 (en) | Suggesting spelling corrections for personal names | |
CN108399268B (en) | Incremental heterogeneous graph clustering method based on game theory | |
CN111274424B (en) | Semantic enhanced hash method for zero sample image retrieval | |
CN112395438A (en) | Hash code generation method and system for multi-label image | |
CN103488713A (en) | Cross-modal search method capable of directly measuring similarity of different modal data | |
Cintia Ganesha Putri et al. | Design of an unsupervised machine learning-based movie recommender system | |
US20120109964A1 (en) | Adaptive multimedia semantic concept classifier | |
Zamiri et al. | MVDF-RSC: Multi-view data fusion via robust spectral clustering for geo-tagged image tagging | |
CN112800344B (en) | Deep neural network-based movie recommendation method | |
CN108920647B (en) | Low-rank matrix filling TOP-N recommendation method based on spectral clustering | |
CN110083732B (en) | Picture retrieval method and device and computer storage medium | |
KR101467707B1 (en) | Method for instance-matching in knowledge base and device therefor | |
WO2020049666A1 (en) | Time-series data processing device | |
CN116383437A (en) | Cross-modal material recommendation method based on convolutional neural network | |
CN116821519A (en) | Intelligent recommendation method for system filtering and noise reduction based on graph structure | |
WO2023279685A1 (en) | Method for mining core users and core items in large-scale commodity sales |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||