CN111914920A - Sparse coding-based similarity image retrieval method and system - Google Patents

Sparse coding-based similarity image retrieval method and system

Info

Publication number
CN111914920A
CN111914920A (application CN202010724862.4A)
Authority
CN
China
Prior art keywords
image
sparse
similarity
characterization
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010724862.4A
Other languages
Chinese (zh)
Inventor
华臻
王浩然
李小玲
吴昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Technology and Business University
Original Assignee
Shandong Technology and Business University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Technology and Business University filed Critical Shandong Technology and Business University
Priority to CN202010724862.4A priority Critical patent/CN111914920A/en
Publication of CN111914920A publication Critical patent/CN111914920A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/513 Sparse representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a similarity image retrieval method and system based on sparse coding, and relates to the field of image recognition. The similarity image retrieval method based on sparse coding comprises the following steps: performing basis vector characterization on a reference image to obtain a first sparse characterization result; performing basis vector characterization on the images in an image library to obtain a second sparse characterization result; calculating the similarity between the first sparse characterization result and the second sparse characterization result; and determining whether the similarity is greater than a preset threshold, judging the image to be a similar image if it is and a non-similar image if it is not. The method extracts image feature information more fully and makes the similarity calculation more accurate and targeted. In addition, the invention also provides a similarity image retrieval system based on sparse coding, which comprises a first acquisition module, a second acquisition module, a calculation module and a judgment module.

Description

Sparse coding-based similarity image retrieval method and system
Technical Field
The invention relates to the field of image recognition, in particular to a similarity image retrieval method and system based on sparse coding.
Background
With the advent of the digital media age, massive numbers of digital images have become an indispensable part of daily life and find wide application in fields such as life sciences, education and culture. Many classical machine learning methods, especially deep learning methods, can retrieve a target image from a vast image library. Retrieving semantically similar images from a massive image library using a single query image therefore has great practical value.
However, conventional machine learning methods cannot be applied effectively to single-image similarity retrieval. Today's mainstream methods have the following problems:
1. feature extraction is inefficient and highly dependent on training samples;
2. the similarity calculation lacks a targeted measure.
Disclosure of Invention
The invention aims to provide a similarity image retrieval method based on sparse coding, which can more fully extract image characteristic information and is more accurate and targeted in the similarity calculation process.
Another object of the present invention is to provide a similarity image retrieval system based on sparse coding, which is capable of executing the above similarity image retrieval method based on sparse coding.
Embodiments of the invention are realized as follows:
in a first aspect, an embodiment of the present application provides a similarity image retrieval method based on sparse coding, which includes the following steps: performing basis vector characterization on a reference image to obtain a first sparse characterization result; performing basis vector characterization on the images in an image library to obtain a second sparse characterization result; calculating the similarity between the first sparse characterization result and the second sparse characterization result; and determining whether the similarity is greater than a preset threshold, and if so, judging the library image to be a similar image, otherwise a non-similar image.
In some embodiments of the present invention, before the basis vector characterization of the reference image to obtain the first sparse characterization result, the method further includes training the basis vectors using representative images in the image library.
In some embodiments of the present invention, training the basis vectors using representative images in the image library includes fixing the basis vectors in the dictionary and adjusting the coding coefficients so that the objective function is minimized.
In some embodiments of the present invention, training the basis vectors using representative images in the image library includes fixing the coding coefficients and adjusting the basis vectors in the dictionary so that the objective function is minimized.
In some embodiments of the present invention, training the basis vectors using representative images in the image library includes iterating until convergence to obtain a set of basis vectors that represent the sample images well.
In some embodiments of the present invention, before the basis vector characterization of the reference image to obtain the first sparse characterization result, the method further includes optimizing the basis vectors by cross-validation.
In some embodiments of the present invention, before the basis vector characterization of the reference image to obtain the first sparse characterization result, the method further includes performing optimized characterization of images whose characterization is inaccurate.
In some embodiments of the present invention, before the basis vector characterization of the reference image to obtain the first sparse characterization result, saliency detection is performed on the image.
In a second aspect, an embodiment of the present application provides a similarity image retrieval system based on sparse coding, which includes: a first acquisition module configured to perform basis vector characterization on a reference image to obtain a first sparse characterization result; a second acquisition module configured to perform basis vector characterization on the images in an image library to obtain a second sparse characterization result; a calculation module configured to calculate the similarity between the first sparse characterization result and the second sparse characterization result; and a judgment module configured to determine whether the similarity is greater than a preset threshold, the image being judged a similar image if it is and a non-similar image if it is not.
In some embodiments of the invention, the system further comprises at least one memory for storing computer instructions and at least one processor in communication with the memory, wherein, when the computer instructions are executed by the at least one processor, the at least one processor causes the system to operate the first acquisition module, the second acquisition module, the calculation module and the judgment module.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects:
the method extracts image feature information more fully and makes the similarity calculation more accurate and targeted. By discovering features and structures that differ between related domains yet remain invariant, and by making full use of existing resources, it constructs a sparse feature transfer model that transfers knowledge from other related tasks to a target task with few labelled samples, helping to improve the learning efficiency of the target task. The transfer sparse coding image retrieval technique combines sparse coding with transfer learning and is a new technique for reusing sparse features across different domains.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram illustrating a step of a similarity image retrieval method based on sparse coding according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating detailed steps of a similarity image retrieval method based on sparse coding according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a similarity image retrieval system based on sparse coding according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the individual features of the embodiments can be combined with one another without conflict.
Example 1
Referring to fig. 1, fig. 1 is a schematic diagram illustrating steps of a similarity image retrieval method based on sparse coding according to an embodiment of the present invention, which includes the following steps:
step S100, performing base vector representation according to a reference image to obtain a first sparse representation result;
specifically, the reference image is the selected image to be retrieved, the basis vector is the most basic component in the vector space, because the basis vectors are necessarily linearly independent, each vector in the vector space can be represented by the basis vector, and the dimension of the image is reduced to obtain the first sparse representation result.
In some embodiments, an electronic device captures pictures in various scenes, such as a night scene or a backlit environment. In the same shooting scene, the device may capture multiple frames, register them, and then fuse them; in the related art, however, the imaging quality of the fused image is poor. To choose a reference, the electronic device may first acquire the frames to be processed and then determine a reference image among them, for example the frame with the highest definition; the remaining frames are non-reference images.
Step S110, performing base vector characterization according to the images in the image library to obtain a second sparse characterization result;
specifically, a basis vector characterization is performed on each image in the image library, so that a second sparse characterization result is obtained.
In some embodiments, the image library may contain four images A, B, C and D. Basis vector characterization is performed on each of them, reducing their dimensionality to obtain a second sparse characterization result for image A, a third for image B, a fourth for image C and a fifth for image D.
Step S120, calculating the similarity of the first sparse representation result and the second sparse representation result;
specifically, the similarity between the first sparse representation result and the second sparse representation result is calculated by utilizing the multi-weight Euclidean distance.
In some embodiments, a multi-weight Euclidean distance may be used. The Euclidean distance is a similarity measure: it gives the distance between two vectors, its value ranges from 0 to positive infinity, and the smaller the distance, the more similar the vectors. Because different regions of an image differ in importance, the ordinary Euclidean distance cannot accurately measure the similarity of different images, so a weighted Euclidean distance is used, with different weights assigned according to the saliency of each region. The similarity between the first sparse characterization result and the second sparse characterization result is therefore computed with a multi-weight Euclidean distance, and the similarity between the first result and each of the remaining results can be computed in the same way, for example between the first and third, the first and fourth, and the first and fifth sparse characterization results.
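A minimal sketch of this multi-weight Euclidean similarity is shown below; the per-coefficient weights, the mapping from distance to a bounded similarity, and all variable names are illustrative assumptions rather than the patent's exact formulas.

```python
# Sketch: multi-weight Euclidean distance between sparse codes, mapped to a similarity.
import numpy as np

def weighted_euclidean_similarity(code_a, code_b, weights):
    """Similarity in (0, 1]; larger means more similar (assumed mapping 1 / (1 + d))."""
    d = np.sqrt(np.sum(weights * (code_a - code_b) ** 2))
    return 1.0 / (1.0 + d)

rng = np.random.default_rng(1)
first_code = rng.standard_normal(64)              # code of the reference image
library_codes = {"A": rng.standard_normal(64),    # stand-ins for the library images' codes
                 "B": rng.standard_normal(64),
                 "C": rng.standard_normal(64),
                 "D": rng.standard_normal(64)}
weights = np.ones(64)                             # e.g. saliency-derived weights; uniform here

similarities = {name: weighted_euclidean_similarity(first_code, code, weights)
                for name, code in library_codes.items()}
best = max(similarities, key=similarities.get)    # most similar library image
```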
Step S130, judging whether the similarity is greater than a preset threshold value;
specifically, it is determined whether the similarity is greater than a preset threshold, and if the similarity is greater than the preset threshold, the process proceeds to step S140, and if the similarity is less than or equal to the preset threshold, the process proceeds to step S150.
In some embodiments, the preset threshold may change with the picture. For example, suppose a picture has n pixels, of which n1 have a gray value below the threshold and n2 have a gray value greater than or equal to it (n1 + n2 = n); w1 and w2 denote the proportions of the two classes of pixels, the mean and variance of all pixels below the threshold are μ1 and σ1, and the mean and variance of all pixels at or above the threshold are μ2 and σ2. The histogram of the image is computed, candidate thresholds are tried from small to large and substituted into the BBS formula, and the value that yields the minimum intra-class variance (equivalently, the maximum inter-class variance) is taken as the final threshold, which may be, for example, 0.4, 0.5 or 0.6.
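The threshold search described above can be sketched as an Otsu-style scan over a gray-level histogram that maximizes the inter-class variance (equivalently minimizes the intra-class variance). The 8-bit, 256-bin range and the exact criterion are assumptions made to illustrate the idea.

```python
# Sketch: choose a threshold by maximizing the between-class ("inter-class") variance.
import numpy as np

def select_threshold(gray_image):
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    p = hist / hist.sum()                          # probability of each gray level
    levels = np.arange(256)
    best_t, best_between = 0, -1.0
    for t in range(1, 256):
        w1, w2 = p[:t].sum(), p[t:].sum()          # proportions of the two pixel classes
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (levels[:t] * p[:t]).sum() / w1
        mu2 = (levels[t:] * p[t:]).sum() / w2
        between = w1 * w2 * (mu1 - mu2) ** 2       # between-class variance for threshold t
        if between > best_between:
            best_between, best_t = between, t
    return best_t
```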
Step S140 of determining that the image is a similar image;
specifically, the retrieved pictures are determined to be similar images.
In some embodiments, the similarity between the first sparse representation result and the fourth sparse representation result is the highest, so that the reference image and the C image in the image library are determined to be similar images.
In step S150, it is determined as a non-similar image.
Example 2
Referring to fig. 2, fig. 2 is a schematic diagram illustrating detailed steps of a similarity image retrieval method based on sparse coding according to an embodiment of the present invention, which includes the following steps:
s200, fixing base vectors in the dictionary, and adjusting coding coefficients to minimize a target function;
specifically, the basis vectors in the dictionary are unchanged, and the coding coefficients are changed, so that the target function is minimum, and the dictionary training is completed.
Step S210, fixing the coding coefficient, and adjusting the basis vector in the dictionary to minimize the target function;
specifically, the encoding coefficient is kept unchanged, and the basis vector in the dictionary is changed, so that the target function is minimized, and the dictionary training is completed.
Step S220, obtaining a group of basis vectors of the well expressed sample image through continuous iteration until convergence;
specifically, a set of basis vectors that well represent the sample image can be obtained through multiple iterations until the model converges.
Step S230, optimizing the basis vectors by cross-validation;
In some embodiments, multiple sets of basis vectors are obtained from multiple experiments on different image libraries, and only the core set of basis vectors with the highest similarity to the other sets is retained.
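One way to realize this selection is sketched below, under the assumption that two dictionaries are compared by the mean best-match cosine similarity of their unit-norm atoms; the consistency measure is an illustrative choice, not the patent's.

```python
# Sketch: keep the dictionary most consistent with those trained on other image subsets.
import numpy as np

def dictionary_consistency(D_a, D_b):
    """Mean of each atom's best |cosine| match in the other dictionary (atoms unit-norm)."""
    sims = np.abs(D_a @ D_b.T)
    return sims.max(axis=1).mean()

def select_core_dictionary(dictionaries):
    scores = [np.mean([dictionary_consistency(D_i, D_j)
                       for j, D_j in enumerate(dictionaries) if j != i])
              for i, D_i in enumerate(dictionaries)]
    return dictionaries[int(np.argmax(scores))]
```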
Step S240, performing optimized characterization of images whose characterization is inaccurate;
In some embodiments, fidelity-constrained optimized characterization is applied to images that are characterized inaccurately: the mean value is subtracted from each image block, the image texture is expressed by the dictionary, the low-resolution mean is taken directly as the mean of the high-resolution image at the reconstruction stage, and after encoding and reconstruction the result is further optimized with gradient descent.
Step S250, carrying out significance detection on the image;
in some embodiments, the brain, after obtaining the image, can quickly think about finding the interesting parts and ignoring the other parts, which mechanism enables the person to quickly react to the scene being viewed. With the development of computer technology, the recognition and processing of images by using computers is an important development trend. The essence of significance detection is feature extraction.
For example, for an input image of 300 × 400, 30FPS may be reached. Changes are made on some local modules, one aspect is that a global configuration module (GGM) is proposed, which aims at providing position information of some salient objects for different scale features, and a Feature Aggregation Module (FAM) is also proposed for fusing different scale feature maps. The two modules are put together, so that the network can obtain different reception fields, and the detection performance of the salient object is improved. In addition, an edge detection branch is added for synchronous training, and the edge detection branch can also improve the detection performance of the salient object.
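The embodiment above describes a deep saliency network with guidance and aggregation modules and an edge branch. As a lightweight stand-in that only shows what a saliency map contributes to the later region weighting, the sketch below uses the classical spectral-residual method instead; this is a swapped-in technique, not the network the embodiment describes.

```python
# Sketch: spectral-residual saliency map, normalized to [0, 1] for use as region weights.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log(np.abs(f) + 1e-8)
    residual = log_amp - uniform_filter(log_amp, size=3)         # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    sal = gaussian_filter(sal, sigma=2.5)                        # smooth the raw map
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```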
Step S260, performing base vector representation according to the reference image to obtain a first sparse representation result;
specifically, the reference image is the selected image to be retrieved, the basis vector is the most basic component in the vector space, because the basis vectors are necessarily linearly independent, each vector in the vector space can be represented by the basis vector, and the dimension of the image is reduced to obtain the first sparse representation result. Reference may be made to the description of step S100, which is not repeated here.
Step S270, performing base vector characterization according to the images in the image library to obtain a second sparse characterization result;
specifically, a basis vector characterization is performed on each image in the image library, so that a second sparse characterization result is obtained. Reference may be made to the description of step S110, which is not repeated here.
Step S280, calculating the similarity of the first sparse representation result and the second sparse representation result;
specifically, the similarity between the first sparse representation result and the second sparse representation result is calculated by utilizing the multi-weight Euclidean distance. Reference may be made to the description of step S120, which is not repeated here.
Step S290, judging whether the similarity is greater than a preset threshold value;
specifically, it is determined whether the similarity is greater than a preset threshold, and if the similarity is greater than the preset threshold, the process proceeds to step S3000, and if the similarity is less than or equal to the preset threshold, the process proceeds to step S310.
Step S300, determining that the image is a similar image;
specifically, the retrieved pictures are determined to be similar images.
In some embodiments, the similarity between the first sparse representation result and the third sparse representation result is the highest, so that the reference image and the B image in the image library are determined to be similar images.
In step S310, the image is determined to be a non-similar image.
Example 3
Referring to fig. 3, fig. 3 is a schematic diagram of a similarity image retrieval system based on sparse coding according to an embodiment of the present invention. The system comprises a first acquisition module, a second acquisition module, a calculation module and a judgment module. The first acquisition module performs basis vector characterization on a reference image to obtain a first sparse characterization result; the second acquisition module performs basis vector characterization on the images in the image library to obtain a second sparse characterization result; the calculation module calculates the similarity between the first sparse characterization result and the second sparse characterization result; and the judgment module determines whether the similarity is greater than a preset threshold, judging a similar image if it is and a non-similar image if it is not.
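A hypothetical skeleton of these four modules, in plain Python, shows how they compose; the class name, the thresholding rule inside characterize(), and the distance-to-similarity mapping are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical skeleton of the retrieval system's four modules.
import numpy as np

class SparseRetrievalSystem:
    def __init__(self, dictionary, weights, threshold=0.5):
        self.dictionary = dictionary            # basis vectors, shape (n_atoms, n_features)
        self.weights = weights                  # per-coefficient weights
        self.threshold = threshold              # preset similarity threshold

    def characterize(self, feature_vector):
        """First/second acquisition modules: basis-vector characterization (placeholder)."""
        code = self.dictionary @ feature_vector
        code[np.abs(code) < np.quantile(np.abs(code), 0.9)] = 0.0   # keep the largest 10%
        return code

    def similarity(self, code_a, code_b):
        """Calculation module: multi-weight Euclidean distance mapped to (0, 1]."""
        d = np.sqrt(np.sum(self.weights * (code_a - code_b) ** 2))
        return 1.0 / (1.0 + d)

    def judge(self, similarity):
        """Judgment module: compare against the preset threshold."""
        return "similar" if similarity > self.threshold else "non-similar"
```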
Also included are a memory, a processor, and a communication interface, which are electrically connected, directly or indirectly, to each other to enable transmission or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory may be used to store software programs and modules, and the processor may execute various functional applications and data processing by executing the software programs and modules stored in the memory. The communication interface may be used for communicating signaling or data with other node devices.
The memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor may be an integrated circuit chip having signal processing capabilities. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In summary, the similarity image retrieval method and system based on sparse coding provided by the embodiments of the application extract image feature information more fully and make the similarity calculation more accurate and targeted. By discovering features and structures that differ between related domains yet remain invariant, and by making full use of existing resources, a sparse feature transfer model is constructed, transferring knowledge from other related tasks to a target task with few labelled samples and improving the learning efficiency of the target task. The transfer sparse coding image retrieval technique combines sparse coding with transfer learning and is a new technique for reusing sparse features across different domains.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A similarity image retrieval method based on sparse coding is characterized by comprising the following steps:
performing base vector characterization according to the reference image to obtain a first sparse characterization result;
performing base vector characterization according to the images in the image library to obtain a second sparse characterization result;
calculating the similarity of the first sparse representation result and the second sparse representation result;
and judging whether the similarity is greater than a preset threshold value, if so, judging the image to be a similar image, and if not, judging the image to be a non-similar image.
2. The sparse coding-based similarity image retrieval method as claimed in claim 1, wherein before the performing base vector characterization on the basis of the reference image to obtain the first sparse characterization result, further comprises:
the basis vectors are trained using representative images in the image library.
3. The sparse coding-based similarity image retrieval method as claimed in claim 2, wherein the training of the basis vectors by using the representative images in the image library comprises:
and fixing the base vectors in the dictionary, and adjusting the coding coefficients to minimize the target function.
4. The sparse coding-based similarity image retrieval method as claimed in claim 2, wherein the training of the basis vectors by using the representative images in the image library comprises:
and fixing the encoding coefficient, and adjusting the base vector in the dictionary to minimize the target function.
5. The sparse coding-based similarity image retrieval method as claimed in claim 2, wherein the training of the basis vectors by using the representative images in the image library comprises:
and obtaining a group of base vectors of the well-expressed sample image by continuously iterating until convergence.
6. The sparse coding-based similarity image retrieval method as claimed in claim 1, wherein before the performing base vector characterization on the basis of the reference image to obtain the first sparse characterization result, further comprises:
and optimizing the basis vectors according to cross mutual experiments.
7. The sparse coding-based similarity image retrieval method as claimed in claim 1, wherein before the performing base vector characterization on the basis of the reference image to obtain the first sparse characterization result, further comprises:
and performing optimized characterization on the images with inaccurate characterization.
8. The sparse coding-based similarity image retrieval method as claimed in claim 1, wherein before the performing base vector characterization on the basis of the reference image to obtain the first sparse characterization result, further comprises:
and carrying out significance detection on the image.
9. A sparse coding-based similarity image retrieval system, comprising:
the first acquisition module is used for performing base vector representation according to the reference image to acquire a first sparse representation result;
the second acquisition module is used for performing base vector characterization according to the images in the image library to obtain a second sparse characterization result;
the calculation module is used for calculating the similarity between the first sparse representation result and the second sparse representation result;
and the judging module is used for judging whether the similarity is greater than a preset threshold value, if so, judging the image to be a similar image, and if not, judging the image to be a non-similar image.
10. The sparse coding-based similarity image retrieval system of claim 9, further comprising:
at least one memory for storing computer instructions;
at least one processor in communication with the memory, wherein the at least one processor, when executing the computer instructions, causes the system to operate the first acquisition module, the second acquisition module, the calculation module and the judgment module.
CN202010724862.4A 2020-07-24 2020-07-24 Sparse coding-based similarity image retrieval method and system Pending CN111914920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010724862.4A CN111914920A (en) 2020-07-24 2020-07-24 Sparse coding-based similarity image retrieval method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010724862.4A CN111914920A (en) 2020-07-24 2020-07-24 Sparse coding-based similarity image retrieval method and system

Publications (1)

Publication Number Publication Date
CN111914920A true CN111914920A (en) 2020-11-10

Family

ID=73280813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010724862.4A Pending CN111914920A (en) 2020-07-24 2020-07-24 Sparse coding-based similarity image retrieval method and system

Country Status (1)

Country Link
CN (1) CN111914920A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114825636A (en) * 2022-05-26 2022-07-29 深圳博浩远科技有限公司 Health state monitoring and warning system and method for photovoltaic inverter

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020265A (en) * 2012-12-25 2013-04-03 深圳先进技术研究院 Image retrieval method and system
CN104142978A (en) * 2014-07-14 2014-11-12 重庆邮电大学 Image retrieval system and image retrieval method based on multi-feature and sparse representation
CN106991426A (en) * 2016-09-23 2017-07-28 天津大学 Remote sensing images sparse coding dictionary learning method based on DSP embedded
CN110930301A (en) * 2019-12-09 2020-03-27 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020265A (en) * 2012-12-25 2013-04-03 深圳先进技术研究院 Image retrieval method and system
CN104142978A (en) * 2014-07-14 2014-11-12 重庆邮电大学 Image retrieval system and image retrieval method based on multi-feature and sparse representation
CN106991426A (en) * 2016-09-23 2017-07-28 天津大学 Remote sensing images sparse coding dictionary learning method based on DSP embedded
CN110930301A (en) * 2019-12-09 2020-03-27 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴潇 (Wu Xiao): "Research on Shape-Based Trademark Image Retrieval Technology", China Master's Theses Full-text Database, Information Science and Technology, no. 05, pages 138-1254 *


Similar Documents

Publication Publication Date Title
CN108229419B (en) Method and apparatus for clustering images
US20240177462A1 (en) Few-shot object detection method
CN112052868A (en) Model training method, image similarity measuring method, terminal and storage medium
CN114238692A (en) Network live broadcast-oriented video big data accurate retrieval method and system
CN114861842B (en) Few-sample target detection method and device and electronic equipment
CN111179270A (en) Image co-segmentation method and device based on attention mechanism
CN113435499A (en) Label classification method and device, electronic equipment and storage medium
Makarov et al. Sparse depth map interpolation using deep convolutional neural networks
CN114565768A (en) Image segmentation method and device
CN114821823A (en) Image processing, training of human face anti-counterfeiting model and living body detection method and device
CN111914920A (en) Sparse coding-based similarity image retrieval method and system
CN116109907B (en) Target detection method, target detection device, electronic equipment and storage medium
CN112365513A (en) Model training method and device
CN116645513A (en) Watermark extraction method, model training method, device, electronic equipment and medium
CN111414921A (en) Sample image processing method and device, electronic equipment and computer storage medium
CN114155388B (en) Image recognition method and device, computer equipment and storage medium
CN116486153A (en) Image classification method, device, equipment and storage medium
CN116188815A (en) Video similarity detection method, system, storage medium and electronic equipment
CN114820558A (en) Automobile part detection method and device, electronic equipment and computer readable medium
CN116127083A (en) Content recommendation method, device, equipment and storage medium
CN113742525A (en) Self-supervision video hash learning method, system, electronic equipment and storage medium
CN110991543B (en) Image region of interest clustering method and device, computing device and storage medium
CN110751197A (en) Picture classification method, picture model training method and equipment
CN113239943B (en) Three-dimensional component extraction and combination method and device based on component semantic graph
CN116912518B (en) Image multi-scale feature processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination