CN114491122B - Picture matching method for similar image retrieval - Google Patents
- Publication number: CN114491122B
- Application number: CN202111634430.5A
- Authority
- CN
- China
- Prior art keywords: edge, graph, matrix, point, information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a graph matching method for similar image retrieval, which mainly comprises two stages: offline data set construction and online deep learning training. The first stage comprises selecting the Pascal VOC data set as the training data set, and selecting a number of images that carry annotation points and cover all categories of the data set as the training set. The second stage comprises the following steps: a pretrained VGG-16 neural network is adopted as the feature extractor; each image is passed through a fully connected Delaunay triangulation technique to generate a topological structure of bidirectional edges; after the point-feature embedding of the topological geometric information is completed, the feature description of the edges is carried out on the basis of the point-edge incidence matrix; an edge-to-edge similarity matrix is then constructed from the edge feature description vectors of each graph; through these steps, the final point features are obtained, and the similarity matrix of point-to-point matching is calculated. The scheme also has the advantages of high retrieval performance, high efficiency and easy implementation.
Description
Technical Field
The invention relates to the technical field of image retrieval, in particular to a graph matching method for similar image retrieval.
Background
With the development of the internet, how to efficiently retrieve images meeting the demands of users in a network environment is a core technical problem. In general, the image retrieval technique is mainly divided into two branches: text-based and content-based retrieval. Text-based image retrieval typically queries images in the form of keywords or browses images under a specific category according to a hierarchical directory. While content-based image retrieval is the retrieval of other images with similar characteristics from an image database based on the semantic content and characteristics of the images.
The existing content-based image retrieval system firstly extracts the characteristic information of the image content, stores the characteristic information in a characteristic library, and then compares and sorts related characteristics according to the characteristics of the query image, so as to obtain the retrieval result of the image. The content-based image retrieval technology uses a computer to carry out unified and regular mathematical description on images, so that the manpower consumption for manually labeling the image keywords is reduced, and the retrieval efficiency is improved. With the improvement of computer performance and the development of deep learning, the computer can extract rich features such as object color, shape and structure from the image. However, matching the similarity of the structured feature information is a problem with high computational complexity.
From the perspective of mathematical optimization, graph matching of structured information is an NP-hard second-order combinatorial problem. Graph matching aims to find the node-to-node correspondence between objects by exploiting graph structure information. On the other hand, the rapidly developing fields of deep learning and graph convolutional neural networks show great potential on graph matching problems. By means of graph embedding techniques based on graph convolutional neural networks, the second-order combinatorial problem, which is hard to solve exactly in polynomial time, is converted into a first-order problem that can be solved exactly in polynomial time. However, the existing deep graph matching methods based on graph embedding do not consider second-order edge-to-edge similarity information; the present method introduces this information as cross-graph embedding information, improving both precision and efficiency. For this reason, the prior art needs further improvement and perfection.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a graph matching method for similar image retrieval.
The aim of the invention is achieved by the following technical scheme:
the image matching method for similar image retrieval mainly comprises two stages of offline data set construction and online deep learning training, and comprises the following specific steps:
stage one: and constructing a data set matched with the offline depth image.
Step S1: the Pascal VOC dataset was chosen as the training dataset.
Step S2: and selecting a plurality of images which are provided with annotation points and cover all kinds of data sets as a training set.
Stage two: the depth map matching network is trained online.
Step S3: the pretrained VGG-16 neural network is used as a feature extractor, and parameters of the neural network are trained on an ImageNet data set in advance.
Step S4: each image is subjected to fully connected delaunay triangulation technology to generate a topological structure of a bidirectional edge.
Step S5: after the point feature embedding of the topological geometrical information is completed, the feature description of the edges is carried out on the basis of the point-edge association matrix.
Step S6: according to the edge feature description vectors of each graph, an edge-to-edge similarity matrix K_e can be constructed.
Step S61: the point-edge pairing relationships of graph matching can be constructed into a correlation graph model.
Step S62: according to the topological structure of the association graph, the edge-to-edge similarity score and the point-to-point similarity can be associated to obtain a cross-graph conversion matrix.
Step S63: and taking the cross-graph distribution matrix as prior information to perform cross-graph point embedding operation.
Step S7: through the above steps, the final point features of the two graphs can be obtained, and the similarity matrix of point-to-point matching is then calculated.
As a preferred embodiment of the present invention, the step S3 further includes the steps of: the point features F_1 and F_2 of the two images to be matched are obtained through the feature extractor, where d denotes the dimension of the feature vectors and n_1 and n_2 respectively denote the numbers of feature points of the two images; F_1 and F_2 are obtained by splicing the outputs extracted from layers relu4_2 and relu5_1 of the VGG-16 neural network.
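The splicing of the two VGG-16 feature maps at the annotated keypoints can be sketched as follows. This is an illustrative sketch only: the helper name, the toy dimensions, and the nearest-neighbour sampling are assumptions (the patent does not specify the interpolation scheme), with random numpy arrays standing in for the relu4_2 and relu5_1 activations.

```python
import numpy as np

def sample_and_concat(feat_a, feat_b, points):
    """Sample two feature maps at keypoint locations and concatenate.

    feat_a, feat_b: (H, W, C1) and (H, W, C2) feature maps, assumed
    already resized to a common spatial resolution. points: (n, 2)
    array of (x, y) pixel coordinates of the annotated keypoints.
    Returns an (n, C1 + C2) matrix of spliced point features.
    """
    xs = points[:, 0].astype(int)
    ys = points[:, 1].astype(int)
    fa = feat_a[ys, xs]          # (n, C1) nearest-neighbour sampling
    fb = feat_b[ys, xs]          # (n, C2)
    return np.concatenate([fa, fb], axis=1)

# toy example: two 8x8 maps with 4 and 6 channels, 3 keypoints
rng = np.random.default_rng(0)
F = sample_and_concat(rng.normal(size=(8, 8, 4)),
                      rng.normal(size=(8, 8, 6)),
                      np.array([[1, 2], [4, 4], [7, 0]]))
print(F.shape)  # (3, 10): feature dimension d = C1 + C2
```

In the actual method the two maps would come from the relu4_2 and relu5_1 layers of the pretrained VGG-16 network, giving the d-dimensional point features F_1 and F_2.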
As a preferred embodiment of the present invention, the step S4 further includes the steps of: the attribute of each edge consists of the normalized coordinates of its two endpoints, and the connection information of the edges represents the topological structure of each graph; the point feature information and the edge attribute information are then fed into the graph neural network SplineCNN; SplineCNN serves as the geometric topology information embedding technique and adopts MAX aggregation when aggregating structure information; finally, the point features of the two graphs, each embedded with its own geometric topology information, are obtained.
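The bidirectional-edge topology of step S4 can be sketched with scipy's Delaunay triangulation; the helper name and the example points are illustrative assumptions, and only the conversion of triangles into a bidirectional (both-directions) edge list follows the text.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points):
    """Build the bidirectional edge set of a Delaunay triangulation.

    points: (n, 2) keypoint coordinates. Returns a (2, m) array of
    directed edges in which every undirected edge appears in both
    directions, matching the 'bidirectional edge' topology of step S4.
    """
    tri = Delaunay(points)
    undirected = set()
    for a, b, c in tri.simplices:        # each triangle contributes 3 edges
        for i, j in ((a, b), (b, c), (c, a)):
            undirected.add((min(i, j), max(i, j)))
    directed = [(i, j) for i, j in undirected] + \
               [(j, i) for i, j in undirected]
    return np.array(directed).T

# 4 keypoints whose hull is a quadrilateral -> 2 triangles, 5 edges
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
edges = delaunay_edges(pts)
print(edges.shape)   # (2, 10): 5 undirected edges stored in both directions
```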
As a preferred embodiment of the present invention, the step S5 further includes the steps of: the point-edge incidence matrices G and H of each graph are constructed, where e_1 and e_2 respectively denote the numbers of edges of the two graphs; G_{i,k} = H_{j,k} = 1 means that edge k starts at node i and ends at node j; the edge features of the two graphs are defined as follows:
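The construction of the point-edge incidence matrices G and H can be sketched as follows; the function name and toy edge list are illustrative, but the encoding G_{i,k} = H_{j,k} = 1 for an edge k running from node i to node j follows the text directly.

```python
import numpy as np

def incidence_matrices(n, edges):
    """Point-edge incidence matrices G and H for a directed edge list.

    n: number of nodes; edges: (2, e) array of (start, end) node indices.
    G[i, k] = 1 iff edge k starts at node i; H[j, k] = 1 iff edge k ends
    at node j, so G[i, k] = H[j, k] = 1 encodes 'edge k runs i -> j'.
    """
    e = edges.shape[1]
    G = np.zeros((n, e))
    H = np.zeros((n, e))
    G[edges[0], np.arange(e)] = 1.0
    H[edges[1], np.arange(e)] = 1.0
    return G, H

edges = np.array([[0, 1, 2], [1, 2, 0]])   # 3 nodes, 3 directed edges
G, H = incidence_matrices(3, edges)
print(G[0, 0], H[1, 0])   # 1.0 1.0 -> edge 0 runs from node 0 to node 1
```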
As a preferred embodiment of the present invention, the step S6 further includes the steps of: the edge-to-edge similarity matrix K_e is computed from the edge features of the two graphs, where the weight matrix involved is a training parameter; each element of the K_e matrix represents a piece of edge-to-edge matching information; in order to widen the differences between edge-to-edge similarity values, that is, to emphasize values with high similarity and compress values with low similarity, a normalization operation is performed on the K_e matrix to obtain the normalized ε matrix:
ε = softmax(K_e)    formula (3)
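Formula (3) can be sketched as a numerically stable softmax over K_e; the patent does not state along which axis the softmax is taken, so a row-wise normalization is assumed here for illustration.

```python
import numpy as np

def softmax_rows(K):
    """Row-wise softmax: sharpens high similarity values and
    compresses low ones, as described for the ε matrix."""
    K = K - K.max(axis=1, keepdims=True)   # subtract row max for stability
    expK = np.exp(K)
    return expK / expK.sum(axis=1, keepdims=True)

K_e = np.array([[2.0, 0.5, 0.1],
                [0.2, 1.5, 0.3]])
eps = softmax_rows(K_e)
print(eps.sum(axis=1))   # each row sums to 1
```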
Then, the normalized ε matrix is converted into a cross-graph edge transformation matrix through the structure of the association graph.
Based on the cross-graph transformation matrix, the cross-graph feature embedding information can be obtained; for each node i, the cross-graph feature information m_{j→i} is calculated as follows:
Finally, a vector addition operation is carried out on the cross-graph feature information and the point feature information:
a similar operation is also performed for the feature points of the second graph.
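The cross-graph point embedding of steps S62–S63 can be sketched as a weighted aggregation followed by the vector addition; since the exact weights of the patent's cross-graph transformation matrix (formulas (4)–(6)) are not reproduced in this text, a uniform stand-in matrix is used, and the function name is illustrative.

```python
import numpy as np

def cross_graph_embed(F1, F2, T12):
    """Cross-graph point embedding by weighted aggregation.

    F1: (n1, d) point features of graph 1; F2: (n2, d) of graph 2;
    T12: (n1, n2) cross-graph transformation matrix whose entry (i, j)
    weights how much node j of graph 2 contributes to node i of graph 1.
    The aggregated message m is added to the original features.
    """
    m = T12 @ F2              # m[i] = sum_j T12[i, j] * F2[j]
    return F1 + m             # vector addition of formula-style update

rng = np.random.default_rng(1)
F1, F2 = rng.normal(size=(3, 4)), rng.normal(size=(5, 4))
T12 = np.full((3, 5), 0.2)    # uniform weights as an illustrative stand-in
out = cross_graph_embed(F1, F2, T12)
print(out.shape)  # (3, 4): same shape as F1, now cross-graph enriched
```

The symmetric operation for the second graph would use the transposed transformation matrix with F1.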
As a preferred embodiment of the present invention, the step S7 further includes the steps of: the similarity matrix formula is as follows:
The linear solution of the graph matching problem is based on the Sinkhorn iterative algorithm, which normalizes the score matrix S alternately along its rows and columns to obtain the soft assignment matrix P:
P_ij = Sinkhorn(exp(S_ij))    formula (8).
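Formula (8) can be sketched as the classic Sinkhorn iteration; the iteration count is an illustrative choice.

```python
import numpy as np

def sinkhorn(S, iters=50):
    """Sinkhorn normalization: alternately normalize the rows and
    columns of exp(S) until the result is (approximately) doubly
    stochastic, yielding the soft assignment matrix P."""
    P = np.exp(S)
    for _ in range(iters):
        P = P / P.sum(axis=1, keepdims=True)   # row normalization
        P = P / P.sum(axis=0, keepdims=True)   # column normalization
    return P

S = np.array([[3.0, 0.1],
              [0.2, 2.5]])
P = sinkhorn(S)
print(np.round(P, 3))   # rows and columns each sum to ~1
```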
As a preferred embodiment of the present invention, the graph matching method further includes step S8: given the truth distribution matrixAnd a soft allocation matrix P, the error can be obtained by constructing a cross entropy loss function:
as a preferred embodiment of the present invention, the step S1 further includes the steps of: the dataset contains several different categories of images: aircraft, bicycles, birds, boats, bottles, buses, automobiles, cats, chairs, cattle, tables, dogs, horses, motorcycles, humans, plants, sheep, sofas, trains, televisions; each image contains 6 to 23 annotated feature point image coordinates.
As a preferred embodiment of the present invention, the step S2 further includes the steps of: 1682 sheets were selected accordingly as test sets. For each image to be trained, extracting a boundary box containing all annotation feature points, adjusting the image size to 256×256, and finally entering the training of the deep learning network.
The working process and principle of the invention are as follows: aiming at the loss of precision caused by neglecting second-order edge-to-edge similarity information in existing graph-embedding-based deep graph matching schemes, this scheme introduces the second-order edge-to-edge similarity information through a deep graph matching model based on cross-graph embedding. Applied to the retrieval of images of similar objects, it improves matching performance, significantly reduces memory consumption, and ultimately greatly improves the performance and efficiency of image retrieval.
Drawings
Fig. 1 is a schematic flow chart of a graph matching method for similar image retrieval provided by the invention.
Fig. 2 is a schematic diagram of a graph matching method for similar image retrieval provided by the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clear and clear, the present invention will be further described below with reference to the accompanying drawings and examples.
Example 1:
As shown in fig. 1 to 2, the present embodiment discloses a graph matching method for similar image retrieval. The graph matching method mainly comprises the two stages of offline data set construction and online deep learning training, and its specific steps S1 to S8, together with the preferred embodiments of each step, are as described above.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the above examples, and any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention should be made in the equivalent manner, and the embodiments are included in the protection scope of the present invention.
Claims (6)
1. The image matching method for similar image retrieval is characterized by mainly comprising two stages of offline data set construction and online deep learning training, and comprises the following specific steps:
stage one: constructing a data set matched with the offline depth image;
step S1: selecting a Pascal VOC data set as a training data set;
step S2: selecting a plurality of images which are provided with annotation points and cover all kinds of data sets as a training set;
stage two: training a depth map matching network on line;
step S3: the pre-trained VGG-16 neural network is adopted as a feature extractor, and parameters of the neural network are trained on an ImageNet data set in advance;
step S4: generating a topological structure of a bidirectional edge by each image through a fully connected Delaunay triangulation technology;
step S5: after the point feature embedding of the topological geometrical information is completed, carrying out the feature description of the edges on the basis of the point-edge association matrix;
step S6: constructing an edge-to-edge similarity matrix K_e according to the edge feature description vectors of each graph;
Step S61: the point-edge pairing relation matched with the graph is constructed into a correlation graph model;
step S62: according to the topological structure of the association graph, associating the edge-to-edge similarity score with the point similarity to obtain a cross-graph conversion matrix;
step S63: taking the cross-graph distribution matrix as prior information to perform cross-graph point embedding operation;
step S7: through the above steps, the final point features of the two graphs can be obtained, and the similarity matrix of point-to-point matching is then calculated;
the step S4 further includes the steps of: the attribute of each side is composed of normalized two endpoint coordinates, and the connection information of the side represents the topological structure information of each graph; then, the point characteristic information and the side attribute information are input into a graphic neural network SplineCNN as input information; the SplineCNN is used as a geometric topology information embedding technology, and MAX aggregation is adopted in structure information aggregation; finally obtaining the point characteristics embedded with the respective geometric topology informationAnd->
The step S5 further includes the steps of: the point-edge incidence matrices G and H of each graph are constructed, where n_1 and n_2 respectively denote the numbers of feature points of the two images and e_1 and e_2 respectively denote the numbers of edges of the two graphs; G_{i,k} = H_{j,k} = 1 means that edge k starts at node i and ends at node j; the edge features of the two graphs are defined as follows, where d is the dimension of the feature vector:
the step S6 further includes the steps of: edge-to-edge correspondence matrix K e :
Wherein,,is a training parameter; each element of the Ke matrix represents edge-to-edge matching information in order to expand edge-to-edge similarityThe difference of the degree values, that is, the value with high similarity is emphasized and the value with low similarity is compressed, the Ke matrix is normalized to obtain normalized +.>Matrix:
then, the normalized product isTransformation of matrix into cross-map transformation matrix by structure of companion map>
based on the cross-graph transformation matrix, the cross-graph feature embedding information is obtained; for each node i, the cross-graph feature information m_{j→i} is calculated as follows:
finally, a vector addition operation is carried out on the cross-graph feature information and the point feature information:
the same is done for the feature points of the second graph.
2. The graph matching method for similar image retrieval according to claim 1, wherein said step S3 further comprises the steps of: the point features F_1 and F_2 of the two images to be matched are obtained through the feature extractor; F_1 and F_2 are obtained by splicing the outputs extracted from layers relu4_2 and relu5_1 of the VGG-16 neural network.
3. The graph matching method for similar image retrieval according to claim 1, wherein said step S7 further comprises the steps of: the similarity matrix formula is as follows:
the linear solution of the graph matching problem is based on the Sinkhorn iterative algorithm, which normalizes the score matrix S alternately along its rows and columns to obtain the soft assignment matrix P:
P_ij = Sinkhorn(exp(S_ij))    formula (8).
5. The graph matching method for similar image retrieval according to claim 1, wherein said step S1 further comprises the steps of: the dataset contains several different categories of images: aircraft, bicycles, birds, boats, bottles, buses, automobiles, cats, chairs, cattle, tables, dogs, horses, motorcycles, humans, plants, sheep, sofas, trains, televisions; each image contains 6 to 23 annotated feature point image coordinates.
6. The graph matching method for similar image retrieval according to claim 1, wherein said step S2 further comprises the steps of: correspondingly, 1682 images are selected as the test set; for each image to be trained, the bounding box containing all annotated feature points is extracted, the image is resized to 256×256, and training of the deep learning network is finally performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111634430.5A | 2021-12-29 | 2021-12-29 | Picture matching method for similar image retrieval
Publications (2)
Publication Number | Publication Date
---|---
CN114491122A | 2022-05-13
CN114491122B | 2023-07-14
Family
ID=81496804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111634430.5A Active CN114491122B (en) | 2021-12-29 | 2021-12-29 | Picture matching method for similar image retrieval |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114491122B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115063789B (en) * | 2022-05-24 | 2023-08-04 | 中国科学院自动化研究所 | 3D target detection method and device based on key point matching |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106126581A (en) * | 2016-06-20 | 2016-11-16 | 复旦大学 | Cartographical sketching image search method based on degree of depth study |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108595636A (en) * | 2018-04-25 | 2018-09-28 | 复旦大学 | The image search method of cartographical sketching based on depth cross-module state correlation study |
CN110263795B (en) * | 2019-06-04 | 2023-02-03 | 华东师范大学 | Target detection method based on implicit shape model and graph matching |
CN111488498A (en) * | 2020-04-08 | 2020-08-04 | 浙江大学 | Node-graph cross-layer graph matching method and system based on graph neural network |
CN112801206B (en) * | 2021-02-23 | 2022-10-14 | 中国科学院自动化研究所 | Image key point matching method based on depth map embedded network and structure self-learning |
-
2021
- 2021-12-29 CN CN202111634430.5A patent/CN114491122B/en active Active
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant