CN110110116A - Trademark image retrieval method integrating a deep convolutional network and semantic analysis - Google Patents

Trademark image retrieval method integrating a deep convolutional network and semantic analysis

Info

Publication number
CN110110116A
CN110110116A CN201910259374.8A CN201910259374A
Authority
CN
China
Prior art keywords
trademark image
similarity
trade mark
picture
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910259374.8A
Other languages
Chinese (zh)
Other versions
CN110110116B (en)
Inventor
高楠
祝建明
李利娟
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910259374.8A priority Critical patent/CN110110116B/en
Publication of CN110110116A publication Critical patent/CN110110116A/en
Application granted granted Critical
Publication of CN110110116B publication Critical patent/CN110110116B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Library & Information Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

A trademark image retrieval method integrating a deep convolutional neural network and semantic analysis, comprising: Step 1, preprocessing the images; Step 2, training a deep convolutional neural network model; Step 3, feeding an image into the trained model for image matching; Step 4, computing the similarity of two keyword groups; Step 5, computing the similarity of two concepts; Step 6, making a decision with a feature fusion algorithm based on Bayesian theory; Step 7, measuring the distance between two image feature vectors with the Euclidean distance; Step 8, computing the similarity between trademark images; Step 9, constructing a trademark image retrieval tree. The method reduces the influence of subjective factors on retrieval results, addresses the problem of inaccurate image retrieval information, and achieves efficient and accurate trademark image retrieval.

Description

Trademark image retrieval method integrating a deep convolutional network and semantic analysis
Technical field
The present invention relates to deep learning and image retrieval. It proposes retrieving trademark images with a deep convolutional neural network, combined with semantic matching of keyword groups.
Background art
A trademark is the mark of goods or services, a symbol of goodwill and reliability, and has become an indispensable asset in fierce market competition. A new trademark must be sufficiently distinctive to avoid confusion or conflict with registered trademarks. Image retrieval based on computer vision, assisted by related fields such as pattern recognition, offers a promising approach to the problems of current trademark registration. However, existing methods are slow and sensitive to image complexity. Moreover, conventional methods are strongly affected by human subjectivity when handling abstract or complex images. For purely figurative marks and trademark images without complete textual descriptions in particular, traditional trademark retrieval is both difficult and inefficient, and cannot meet the trademark application demand created by China's rapid economic development. The number of registered trademarks in China grows year by year, while the problems inherent in traditional trademark retrieval (subjective manual classification, categories that are hard to define, and trademark image similarity that is difficult to describe) become ever more prominent and seriously restrict the development of trademark registration in China. Developing an automatic and efficient trademark retrieval technique is therefore both important and urgent. The present work was carried out against this background.
Summary of the invention
The present invention overcomes the above shortcomings of the prior art and proposes a trademark image retrieval method that integrates a deep convolutional neural network with semantic analysis. Retrieving trademark images with a deep convolutional neural network raises the retrieval success rate and removes the heavy manual labor of feature engineering. On this basis, semantic matching of keyword groups is performed so that semantic similarity is taken into account during retrieval, which reduces retrieval time while maintaining accuracy, improves performance, and alleviates the "semantic gap" problem to some extent.
The trademark image retrieval method integrating a deep convolutional neural network and semantic analysis of the invention comprises the following steps.
Part one extracts trademark image features with deep learning and computes their similarity; it mainly comprises steps 1-3.
Step 1. Preprocess the images.
Read the trademark image submitted by the user for checking, detect the position of the trademark in the image, and detect the figurative and textual parts of the trademark. Align the trademark images, then normalize their size, and finally pack them into the lmdb file format, laying the foundation for deep learning.
Step 2. Train the deep convolutional neural network model.
Build a deep convolutional neural network with 10 layers in total. The first layer is the input layer, which receives the preprocessed trademark images. It is followed by 5 convolutional layers, each including an activation function (activation layer); ReLU is chosen as the activation function in order to introduce non-linearity into the data. After them come 3 fully connected layers. The 1st, 2nd, and 5th convolutional layers each include a pooling layer, with max pooling chosen as the down-sampling method to reduce the data dimensionality. The last layer, the output layer, outputs the feature information of the trademark image; this yields the trained deep convolutional network model.
After the network is built, import the previously prepared trademark image files, configure the corresponding prototxt files to determine the model structure and training parameters, and obtain the trained model. The output of the model's last layer is extracted as the feature library of the trademark images.
Step 3. Feed the image into the trained model for image matching.
Input the test image into the deep convolutional neural network, extract the feature vector of the target image, and compute the similarity using the feature vector.
Part two extracts the semantic features of the trademark images and computes their similarity; it mainly comprises steps 4-5.
Step 4. Compute the similarity of two keyword groups.
Suppose two preprocessed trademark images I1 and I2 are given. Each image is segmented, and one keyword is extracted for each segmented region, so that each image yields a keyword group. Following the traditional distributional approach of semantic analysis, keyword similarity measures can then be borrowed to analyze the semantic features of the trademark images.
The similarity of the two keyword groups is computed as:
Sim(W1, W2) = max{ Sim(S1i, S2j) }, taken over all sense pairs (S1i, S2j)
where W1 and W2 are two words, here specifically the keyword groups corresponding to images I1 and I2; {S11, S12, ..., S1n} and {S21, S22, ..., S2n} are their concept sets, here the concrete senses of the keyword groups of the two images, with S1n being the n-th sense of word 1, here the keyword of the n-th segmented region of the first image; and Sim(W1, W2), the maximum of the similarities between the individual senses (concepts) of the two words, here represents the semantic-level similarity of the two trademark images.
Step 5. Compute the similarity of two concepts.
The previous step reduces the similarity of two trademark images to the similarity between individual keywords of the two keyword groups; this step addresses that sub-problem.
The similarity of two concepts (word senses) is computed as a weighted combination of four partial similarities, where S1 denotes a sense possessed by a word, here the keyword of a segmented region of an image. The adjustable parameters βi (1 ≤ i ≤ 4) respectively weight four features: the description of the first primary sememe, the description of the other primary sememes, the description of the relational sememes, and the description of the relation symbols, subject to β1 + β2 + β3 + β4 = 1 and β1 ≥ β2 ≥ β3 ≥ β4.
Part three analyzes and combines the image similarities obtained by the first two parts; it mainly comprises steps 6-9.
Step 6. Fuse the similarities of the trademark images.
The two preceding parts yield two similarities between the trademark images, computed from different aspects.
This step applies a feature fusion algorithm based on Bayesian theory to make the decision and fuse the two similarities. The process can be expressed as:
x → ωj, if P(ωj | x) = max{ P(ωk | x), k = 1, ..., c }
where Ω = {ω1, ..., ωc} is the pattern space containing c classes, x = [x1, x2, ..., xN] is an unknown sample described by an N-dimensional real-valued vector, and P(ωk | x) is the posterior probability of the k-th class, k ∈ {1, 2, ..., c}. According to minimum-error-rate Bayesian decision theory, the sample is assigned to the j-th class, which is the class with the maximum posterior probability given the sample x.
Step 7. Measure the distance between two image feature vectors with the Euclidean distance.
To define a similarity measure for trademark images: after the feature vector and class of the input trademark are obtained, the similarity between the feature vector of the input trademark and the feature vectors in the above feature library must be computed. Whether two images are similar is judged mainly from the distance between their feature vectors, computed as:
d = sqrt((x1 - y1)^2 + (x2 - y2)^2 + ... + (xm - ym)^2)
where m is the dimensionality of the feature vectors, d is the distance between the two image feature vectors, xi is the i-th component of the first image's feature vector, and yi is the corresponding component of the second image's feature vector.
Step 8. Compute the similarity between trademark images.
Compute the similarity between each trademark in the library and the input trademark, and return the trademarks with high similarity. The similarity is computed as a function of d, the distance between the two images obtained in step 7.
Step 9. Construct the trademark image retrieval tree.
Using the similarities between trademark images, map each trademark image onto a trademark image retrieval tree, which simplifies the whole retrieval process and establishes a fast retrieval system. This also has a positive effect on handling incomplete or blurred trademark images and on optimizing retrieval results based on user feedback.
The present invention provides a trademark image retrieval method integrating a deep convolutional neural network and semantic analysis, relating to deep learning and image retrieval. Retrieving trademark images with a deep convolutional neural network raises the retrieval success rate and removes the heavy manual labor of feature engineering. On this basis, semantic matching of keyword groups is performed so that semantic similarity is taken into account during retrieval, which reduces retrieval time while maintaining accuracy, improves performance, and alleviates the "semantic gap" problem to some extent.
The invention has the advantages of reducing the influence of subjective factors on retrieval results, solving the problem of inaccurate image retrieval information, and achieving efficient and accurate trademark image retrieval.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the technique of the invention.
Fig. 2 is a diagram of the convolutional network model.
Specific embodiment
To facilitate understanding of the process of the invention, a specific description is given below with reference to the flowchart of Fig. 1:
Part one extracts trademark image features with deep learning and computes their similarity; it mainly comprises steps 1-3.
Step 1. Preprocess the images.
Read the trademark image submitted by the user for checking, detect the position of the trademark in the image, and detect the figurative and textual parts of the trademark. Align the trademark images, then normalize their size, and finally pack them into the lmdb file format, laying the foundation for deep learning. A hedged illustration of this step is sketched below.
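A minimal sketch of step 1 in Python, using OpenCV and the lmdb package; the cropping interface, the target size of 227x227, and the key-naming scheme are assumptions not specified by the patent.

```python
import lmdb
import cv2

TARGET_SIZE = (227, 227)  # assumed normalized size; the patent does not state one

def preprocess_trademark(path, bbox=None):
    """Crop the detected trademark region, resize it, and return encoded bytes."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    if bbox is not None:                      # bbox = (x, y, w, h) from a detector
        x, y, w, h = bbox
        img = img[y:y + h, x:x + w]
    img = cv2.resize(img, TARGET_SIZE)        # size normalization
    ok, buf = cv2.imencode(".png", img)
    return buf.tobytes() if ok else None

def pack_to_lmdb(image_paths, lmdb_path, map_size=1 << 30):
    """Pack the preprocessed trademark images into an lmdb database for training."""
    env = lmdb.open(lmdb_path, map_size=map_size)
    with env.begin(write=True) as txn:
        for i, p in enumerate(image_paths):
            data = preprocess_trademark(p)
            if data is not None:
                txn.put(f"{i:08d}".encode(), data)   # assumed zero-padded index keys
    env.close()
```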
Step 2. Train the deep convolutional neural network model.
Build a deep convolutional neural network with 10 layers in total. The first layer is the input layer, which receives the preprocessed trademark images. It is followed by 5 convolutional layers, each including an activation function (activation layer); ReLU is chosen as the activation function in order to introduce non-linearity into the data. After them come 3 fully connected layers. The 1st, 2nd, and 5th convolutional layers each include a pooling layer, with max pooling chosen as the down-sampling method to reduce the data dimensionality. The last layer, the output layer, outputs the feature information of the trademark image; this yields the trained deep convolutional network model.
After the network is built, import the previously prepared trademark image files, configure the corresponding prototxt files to determine the model structure and training parameters, and obtain the trained model. The output of the model's last layer is extracted as the feature library of the trademark images. A sketch of the described architecture follows.
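A minimal sketch of the 10-layer network described above (input layer, 5 convolutional layers with ReLU, max pooling after the 1st, 2nd, and 5th convolutions, and 3 fully connected layers), written in PyTorch purely for illustration; the channel counts, kernel sizes, input resolution, and number of output classes are assumptions, since the patent specifies the model through Caffe prototxt files without listing them.

```python
import torch
import torch.nn as nn

class TrademarkCNN(nn.Module):
    """5 conv layers (ReLU), max pooling after conv 1, 2 and 5, then 3 FC layers."""
    def __init__(self, num_classes=1000):   # number of classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                 # pool after conv 1
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                 # pool after conv 2
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                 # pool after conv 5
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),   # output layer; its activations serve as the feature vector
        )

    def forward(self, x):                   # x: (N, 3, 227, 227), input size assumed
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```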
Step 3. Feed the image into the trained model for image matching.
Input the test image into the deep convolutional neural network, extract the feature vector of the target image, and compute the similarity using the feature vector; see the sketch after this paragraph.
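As an illustrative continuation of the sketch above (same assumptions), step 3 can be read as running each image through the trained network and taking the last-layer activations as its feature vector; the layout of the feature library as a stacked matrix is an assumption.

```python
import torch

@torch.no_grad()
def extract_feature(model, image_tensor):
    """Run one preprocessed trademark image through the trained network and
    return its last-layer activations as the feature vector (step 3)."""
    model.eval()
    return model(image_tensor.unsqueeze(0)).squeeze(0)   # shape: (num_classes,)

def build_feature_library(model, image_tensors):
    """Stack the feature vectors of all library trademarks into one matrix."""
    return torch.stack([extract_feature(model, t) for t in image_tensors])
```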
Part two extracts the semantic features of the trademark images and computes their similarity; it mainly comprises steps 4-5.
Step 4. Compute the similarity of two keyword groups.
Suppose two preprocessed trademark images I1 and I2 are given. Each image is segmented, and one keyword is extracted for each segmented region, so that each image yields a keyword group. Following the traditional distributional approach of semantic analysis, keyword similarity measures can then be borrowed to analyze the semantic features of the trademark images.
The similarity of the two keyword groups is computed as:
Sim(W1, W2) = max{ Sim(S1i, S2j) }, taken over all sense pairs (S1i, S2j)
where W1 and W2 are two words, here specifically the keyword groups corresponding to images I1 and I2; {S11, S12, ..., S1n} and {S21, S22, ..., S2n} are their concept sets, here the concrete senses of the keyword groups of the two images, with S1n being the n-th sense of word 1, here the keyword of the n-th segmented region of the first image; and Sim(W1, W2), the maximum of the similarities between the individual senses (concepts) of the two words, here represents the semantic-level similarity of the two trademark images.
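A minimal sketch of the max-over-senses rule of step 4, assuming a concept_similarity function such as the one sketched under step 5; how keywords are extracted from the segmented regions is taken as given.

```python
def keyword_group_similarity(senses_1, senses_2, concept_similarity):
    """Sim(W1, W2) = max over all sense pairs of Sim(S1i, S2j) (step 4).

    senses_1, senses_2: the concept sets {S11..S1n} and {S21..S2n}, one sense
    (keyword) per segmented region of each trademark image.
    concept_similarity: a callable scoring two senses, assumed to be the
    step 5 measure.
    """
    return max(concept_similarity(s1, s2) for s1 in senses_1 for s2 in senses_2)
```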
Step 5. Compute the similarity of two concepts.
The previous step reduces the similarity of two trademark images to the similarity between individual keywords of the two keyword groups; this step addresses that sub-problem.
The similarity of two concepts (word senses) is computed as a weighted combination of four partial similarities, where S1 denotes a sense possessed by a word, here the keyword of a segmented region of an image. The adjustable parameters βi (1 ≤ i ≤ 4) respectively weight four features: the description of the first primary sememe, the description of the other primary sememes, the description of the relational sememes, and the description of the relation symbols, subject to β1 + β2 + β3 + β4 = 1 and β1 ≥ β2 ≥ β3 ≥ β4.
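The combination formula of step 5 appears only as an image in the original, so the sketch below uses a plain weighted sum of the four partial similarities as an assumed stand-in; only the constraints on the βi come from the text, and the concrete β values and the partial-similarity functions are placeholders.

```python
BETAS = (0.5, 0.2, 0.17, 0.13)  # assumed values; the text only gives the constraints on the betas

def concept_similarity(s1, s2, partial_sim_fns, betas=BETAS):
    """Step 5: combine four partial similarities between two concepts (senses):
    first primary sememe, other primary sememes, relational sememes, and
    relation symbols. A plain weighted sum is used here as an assumption,
    since the patent's own combination formula is given only as an image."""
    assert abs(sum(betas) - 1.0) < 1e-9                       # beta1 + ... + beta4 = 1
    assert all(betas[i] >= betas[i + 1] for i in range(3))    # beta1 >= beta2 >= beta3 >= beta4
    return sum(b * f(s1, s2) for b, f in zip(betas, partial_sim_fns))
```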
Part three analyzes and combines the image similarities obtained by the first two parts; it mainly comprises steps 6-9.
Step 6. Fuse the similarities of the trademark images.
The two preceding parts yield two similarities between the trademark images, computed from different aspects. This step applies a feature fusion algorithm based on Bayesian theory to make the decision and fuse the two similarities. The process can be expressed as:
x → ωj, if P(ωj | x) = max{ P(ωk | x), k = 1, ..., c }
where Ω = {ω1, ..., ωc} is the pattern space containing c classes, x = [x1, x2, ..., xN] is an unknown sample described by an N-dimensional real-valued vector, and P(ωk | x) is the posterior probability of the k-th class, k ∈ {1, 2, ..., c}. According to minimum-error-rate Bayesian decision theory, the sample is assigned to the j-th class, which is the class with the maximum posterior probability given the sample x.
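A minimal sketch of the minimum-error-rate Bayesian decision of step 6: the sample is assigned to the class with the largest posterior. How the visual and semantic similarity channels are converted into posteriors is not spelled out in the text, so the independence-based fusion below is an assumption.

```python
import numpy as np

def bayes_decision(posteriors):
    """Assign x to the class omega_j with the maximum posterior P(omega_j | x)."""
    posteriors = np.asarray(posteriors, dtype=float)   # shape: (c,), one value per class
    return int(np.argmax(posteriors))

def fuse_posteriors(priors, likelihood_visual, likelihood_semantic):
    """Assumed fusion: treat the visual and semantic similarity channels as
    conditionally independent evidence and combine them via Bayes' rule."""
    joint = np.asarray(priors) * np.asarray(likelihood_visual) * np.asarray(likelihood_semantic)
    return joint / joint.sum()                          # normalized posteriors P(omega_k | x)
```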
Step 7. Measure the distance between two image feature vectors with the Euclidean distance.
To define a similarity measure for trademark images: after the feature vector and class of the input trademark are obtained, the similarity between the feature vector of the input trademark and the feature vectors in the above feature library must be computed. Whether two images are similar is judged mainly from the distance between their feature vectors, computed as:
d = sqrt((x1 - y1)^2 + (x2 - y2)^2 + ... + (xm - ym)^2)
where m is the dimensionality of the feature vectors, d is the distance between the two image feature vectors, xi is the i-th component of the first image's feature vector, and yi is the corresponding component of the second image's feature vector.
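The Euclidean distance of step 7 written out as a short sketch, together with a batch version over an assumed feature-library matrix.

```python
import numpy as np

def euclidean_distance(x, y):
    """d = sqrt(sum_i (x_i - y_i)^2) over the m-dimensional feature vectors (step 7)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum((x - y) ** 2)))

def distances_to_library(query_vec, library):
    """Distance from the query trademark's feature vector to every library vector."""
    library = np.asarray(library, dtype=float)          # shape: (num_trademarks, m)
    return np.sqrt(np.sum((library - np.asarray(query_vec, dtype=float)) ** 2, axis=1))
```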
Step 8. Compute the similarity between trademark images.
Compute the similarity between each trademark in the library and the input trademark, and return the trademarks with high similarity. The similarity is computed as a function of d, the distance between the two images obtained in step 7.
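The step 8 formula is likewise given only as an image in the original, so the sketch below maps distance to similarity with the common choice sim = 1 / (1 + d) purely as an assumed placeholder, and returns the most similar library trademarks.

```python
import numpy as np

def similarity_from_distance(d):
    """Assumed mapping from the step 7 distance d to a similarity in (0, 1];
    the patent's actual formula is not reproduced in this text."""
    return 1.0 / (1.0 + d)

def top_matches(query_vec, library, names, k=10):
    """Return the k library trademarks most similar to the query (step 8)."""
    diffs = np.asarray(library, dtype=float) - np.asarray(query_vec, dtype=float)
    dists = np.sqrt(np.sum(diffs ** 2, axis=1))
    sims = 1.0 / (1.0 + dists)
    order = np.argsort(-sims)[:k]
    return [(names[i], float(sims[i])) for i in order]
```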
Step 9. Construct the trademark image retrieval tree.
Using the similarities between trademark images, map each trademark image onto a trademark image retrieval tree, which simplifies the whole retrieval process and establishes a fast retrieval system. This also has a positive effect on handling incomplete or blurred trademark images and on optimizing retrieval results based on user feedback.
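The patent does not describe how the retrieval tree of step 9 is organized; the sketch below is one heavily assumed possibility, attaching each new trademark under the most similar existing node whenever that similarity exceeds an assumed threshold.

```python
class TreeNode:
    """Hypothetical node type; the patent does not define the tree structure."""
    def __init__(self, name, feature):
        self.name = name
        self.feature = feature
        self.children = []

def insert_into_tree(root, name, feature, similarity, threshold=0.8):
    """Walk down the tree and attach the new trademark under the most similar
    node whose similarity exceeds the (assumed) threshold; otherwise attach
    it at the current level."""
    node = root
    while True:
        best = max(node.children, key=lambda c: similarity(feature, c.feature), default=None)
        if best is not None and similarity(feature, best.feature) >= threshold:
            node = best
        else:
            node.children.append(TreeNode(name, feature))
            return
```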
The present invention provides a trademark image retrieval method integrating a deep convolutional neural network and semantic analysis, relating to deep learning and image retrieval. Retrieving trademark images with a deep convolutional neural network raises the retrieval success rate and removes the heavy manual labor of feature engineering. On this basis, semantic matching of keyword groups is performed so that semantic similarity is taken into account during retrieval, which reduces retrieval time while maintaining accuracy, improves performance, and alleviates the "semantic gap" problem to some extent.
The invention has the advantages of reducing the influence of subjective factors on retrieval results, solving the problem of inaccurate image retrieval information, and achieving efficient and accurate trademark image retrieval.
The content described in the embodiments of this specification merely illustrates the forms in which the inventive concept can be realized. The protection scope of the invention should not be construed as limited to the specific forms stated in the embodiments; it also covers equivalent technical means that a person skilled in the art can conceive of according to the inventive concept.

Claims (1)

1. A trademark image retrieval method integrating a deep convolutional neural network and semantic analysis, comprising the following steps:
Step 1. Preprocess the images: read the trademark image submitted by the user for checking, detect the position of the trademark in the image, detect the figurative and textual parts of the trademark, align the trademark images, normalize their size, and pack them into the lmdb file format, laying the foundation for deep learning;
Step 2. Train the deep convolutional neural network model: build a deep convolutional neural network with 10 layers in total, wherein the first layer is the input layer, which receives the preprocessed trademark images; it is followed by 5 convolutional layers, each including an activation function (activation layer) using ReLU to introduce non-linearity into the data; after them come 3 fully connected layers; the 1st, 2nd, and 5th convolutional layers each include a pooling layer using max pooling to reduce the data dimensionality; the last layer, the output layer, outputs the feature information of the trademark image, yielding the trained deep convolutional network model;
After the network is built, import the previously prepared trademark image files, configure the corresponding prototxt files to determine the model structure and training parameters, and obtain the trained model; the output of the model's last layer is extracted as the feature library of the trademark images;
Step 3. Feed the image into the trained model for image matching: input the test image into the deep convolutional neural network, extract the feature vector of the target image, and compute the similarity using the feature vector;
Step 4. Compute the similarity of two keyword groups as Sim(W1, W2) = max{ Sim(S1i, S2j) }, taken over all sense pairs, where W1 and W2 are two words, {S11, S12, ..., S1n} and {S21, S22, ..., S2n} are their respective concept sets, S1n is the n-th sense possessed by the word, and Sim(W1, W2), the maximum of the similarities between the individual senses (concepts) of the two words, is the similarity sought;
Step 5. Compute the similarity of two concepts as a weighted combination of four partial similarities, where the adjustable parameters βi (1 ≤ i ≤ 4) respectively weight four features: the description of the first primary sememe, the description of the other primary sememes, the description of the relational sememes, and the description of the relation symbols, subject to β1 + β2 + β3 + β4 = 1 and β1 ≥ β2 ≥ β3 ≥ β4;
Step 6. Make the decision with a feature fusion algorithm based on Bayesian theory, whose process can be expressed as x → ωj if P(ωj | x) = max{ P(ωk | x), k = 1, ..., c }, where Ω = {ω1, ..., ωc} is the pattern space containing c classes, x = [x1, x2, ..., xN] is an unknown sample described by an N-dimensional real-valued vector, and P(ωk | x) is the posterior probability of the k-th class, k ∈ {1, 2, ..., c}; according to minimum-error-rate Bayesian decision theory, the sample is assigned to the j-th class, the class with the maximum posterior probability given the sample x;
Step 7. Measure the distance between two image feature vectors with the Euclidean distance: to define a similarity measure for trademark images, after the feature vector and class of the input trademark are obtained, compute the similarity between the feature vector of the input trademark and the feature vectors in the feature library; whether two images are similar is judged mainly from the distance between their feature vectors, computed as d = sqrt((x1 - y1)^2 + (x2 - y2)^2 + ... + (xm - ym)^2), where m is the dimensionality of the feature vectors;
Step 8. Compute the similarity between trademark images: compute the similarity between each trademark in the library and the input trademark as a function of the distance d obtained in step 7, and return the trademarks with high similarity;
Step 9. Construct the trademark image retrieval tree: using the similarities between trademark images, map each trademark image onto a trademark image retrieval tree, simplifying the whole retrieval process and establishing a fast retrieval system; this also has a positive effect on handling incomplete or blurred trademark images and on optimizing retrieval results based on user feedback.
CN201910259374.8A 2019-04-02 2019-04-02 Trademark image retrieval method integrating deep convolutional network and semantic analysis Active CN110110116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910259374.8A CN110110116B (en) 2019-04-02 2019-04-02 Trademark image retrieval method integrating deep convolutional network and semantic analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910259374.8A CN110110116B (en) 2019-04-02 2019-04-02 Trademark image retrieval method integrating deep convolutional network and semantic analysis

Publications (2)

Publication Number Publication Date
CN110110116A true CN110110116A (en) 2019-08-09
CN110110116B CN110110116B (en) 2021-04-06

Family

ID=67484759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910259374.8A Active CN110110116B (en) 2019-04-02 2019-04-02 Trademark image retrieval method integrating deep convolutional network and semantic analysis

Country Status (1)

Country Link
CN (1) CN110110116B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765305A (en) * 2019-10-23 2020-02-07 深圳报业集团 Medium information pushing system and visual feature-based image-text retrieval method thereof
CN112507160A (en) * 2020-12-03 2021-03-16 平安科技(深圳)有限公司 Automatic judgment method and device for trademark infringement, electronic equipment and storage medium
CN113744831A (en) * 2021-08-20 2021-12-03 中国联合网络通信有限公司成都市分公司 Online medical application purchasing system
CN115100665A (en) * 2022-07-22 2022-09-23 贵州中烟工业有限责任公司 Approximate trademark screening method, model construction method and computer-readable storage medium
CN116244458A (en) * 2022-12-16 2023-06-09 北京理工大学 Method for generating training, generating sample pair, searching model training and trademark searching
CN116542818A (en) * 2023-07-06 2023-08-04 图林科技(深圳)有限公司 Trademark monitoring and analyzing method based on big data technology

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388022A (en) * 2008-08-12 2009-03-18 北京交通大学 Web portrait search method for fusing text semantic and vision content
CN101706964A (en) * 2009-08-27 2010-05-12 北京交通大学 Color constancy calculating method and system based on derivative structure of image
US20130022264A1 (en) * 2011-01-24 2013-01-24 Alon Atsmon System and process for automatically finding objects of a specific color
CN106228142A (en) * 2016-07-29 2016-12-14 西安电子科技大学 Face verification method based on convolutional neural networks and Bayesian decision
CN108038122A (en) * 2017-11-03 2018-05-15 福建师范大学 Trademark image retrieval method
CN109408600A (en) * 2018-09-25 2019-03-01 浙江工业大学 Book purchase recommendation method based on data mining

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388022A (en) * 2008-08-12 2009-03-18 北京交通大学 Web portrait search method for fusing text semantic and vision content
CN101706964A (en) * 2009-08-27 2010-05-12 北京交通大学 Color constancy calculating method and system based on derivative structure of image
US20130022264A1 (en) * 2011-01-24 2013-01-24 Alon Atsmon System and process for automatically finding objects of a specific color
CN106228142A (en) * 2016-07-29 2016-12-14 西安电子科技大学 Face verification method based on convolutional neural networks and Bayesian decision
CN108038122A (en) * 2017-11-03 2018-05-15 福建师范大学 Trademark image retrieval method
CN109408600A (en) * 2018-09-25 2019-03-01 浙江工业大学 Book purchase recommendation method based on data mining

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邵妍 (Shao Yan): "Trademark image retrieval based on multi-feature fusion", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765305A (en) * 2019-10-23 2020-02-07 深圳报业集团 Medium information pushing system and visual feature-based image-text retrieval method thereof
CN112507160A (en) * 2020-12-03 2021-03-16 平安科技(深圳)有限公司 Automatic judgment method and device for trademark infringement, electronic equipment and storage medium
WO2022116418A1 (en) * 2020-12-03 2022-06-09 平安科技(深圳)有限公司 Method and apparatus for automatically determining trademark infringement, electronic device, and storage medium
CN113744831A (en) * 2021-08-20 2021-12-03 中国联合网络通信有限公司成都市分公司 Online medical application purchasing system
CN115100665A (en) * 2022-07-22 2022-09-23 贵州中烟工业有限责任公司 Approximate trademark screening method, model construction method and computer-readable storage medium
CN116244458A (en) * 2022-12-16 2023-06-09 北京理工大学 Method for generating training, generating sample pair, searching model training and trademark searching
CN116244458B (en) * 2022-12-16 2023-08-25 北京理工大学 Method for generating training, generating sample pair, searching model training and trademark searching
CN116542818A (en) * 2023-07-06 2023-08-04 图林科技(深圳)有限公司 Trademark monitoring and analyzing method based on big data technology

Also Published As

Publication number Publication date
CN110110116B (en) 2021-04-06

Similar Documents

Publication Publication Date Title
CN111476294B (en) Zero sample image identification method and system based on generation countermeasure network
CN110110116A (en) 2019-08-09 Trademark image retrieval method integrating a deep convolutional network and semantic analysis
CN111241837B (en) Theft case legal document named entity identification method based on anti-migration learning
CN110633409B (en) Automobile news event extraction method integrating rules and deep learning
CN104391942B (en) 2017-05-03 Short text feature expansion method based on semantic graph
CN111931506B (en) Entity relationship extraction method based on graph information enhancement
CN106257455B (en) 2019-04-19 Bootstrapping method for extracting opinion evaluation objects based on dependency templates
CN106407333A (en) Artificial intelligence-based spoken language query identification method and apparatus
CN106909655A (en) Found and link method based on the knowledge mapping entity that production alias is excavated
CN110807084A (en) Attention mechanism-based patent term relationship extraction method for Bi-LSTM and keyword strategy
CN108255813A (en) 2018-07-06 Text matching method based on term frequency-inverse document frequency and CRF
CN108509521B (en) Image retrieval method for automatically generating text index
CN111144119B (en) Entity identification method for improving knowledge migration
CN105389326A (en) Image annotation method based on weak matching probability canonical correlation model
CN106055560A (en) Method for collecting data of word segmentation dictionary based on statistical machine learning method
CN111222330B (en) Chinese event detection method and system
CN114818717A (en) Chinese named entity recognition method and system fusing vocabulary and syntax information
CN112580330A (en) Vietnamese news event detection method based on Chinese trigger word guidance
Atef et al. AQAD: 17,000+ arabic questions for machine comprehension of text
CN110245234A (en) A kind of multi-source data sample correlating method based on ontology and semantic similarity
CN109344233A (en) 2019-02-15 Chinese personal name recognition method
CN107818078B (en) Semantic association and matching method for Chinese natural language dialogue
CN113641788B (en) 2023-08-29 Unsupervised fine-grained opinion mining method for long and short movie reviews
CN114417885A (en) Network table column type detection method based on probability graph model
Li et al. Attention-based LSTM-CNNs for uncertainty identification on Chinese social media texts

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant