CN105117728B - Image feature extraction method and device - Google Patents

Image feature extraction method and device


Publication number
CN105117728B
CN105117728B
Authority
CN
China
Prior art keywords
local
image
preset
coordinate system
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510489413.5A
Other languages
Chinese (zh)
Other versions
CN105117728A (en)
Inventor
Shen Linlin (沈琳琳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huafu Technology Co ltd
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201510489413.5A priority Critical patent/CN105117728B/en
Publication of CN105117728A publication Critical patent/CN105117728A/en
Application granted granted Critical
Publication of CN105117728B publication Critical patent/CN105117728B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758: Involving statistics of pixels or of feature values, e.g. histogram matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image feature extraction method, comprising the following steps: acquiring a first local image block of an image, and determining a central point and sampling points in the first local image block; establishing a local coordinate system according to the central point and the sampling points; extracting local invariant features at the sampling points according to the local coordinate system; and fusing the local invariant features at the sampling points to obtain image features. The invention also discloses an extraction device. The invention avoids estimating the main direction of the local image block and the subsequent direction normalization operation, so that local invariant features representing the image can be obtained accurately.

Description

Image feature extraction method and device
Technical Field
The invention relates to the technical field of computer vision, in particular to an image feature extraction method and an image feature extraction device.
Background
Scale-invariant feature transform (SIFT) is mainly implemented by estimating the main direction of a local image block of an image, normalizing the gradient directions within the block according to that main direction and computing their histogram, and finally adopting the histogram as the feature representation of the local image block. However, once the main direction is estimated inaccurately, the subsequent histogram contains large errors and no longer represents the local invariant features of the image well.
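For illustration only, the SIFT-style main-direction estimation described above can be sketched as follows; this is a deliberate simplification (no Gaussian weighting, no peak interpolation), with function name and bin count of our own choosing, and is not the patented method:

```python
import numpy as np

def main_direction(patch, bins=36):
    """Estimate a patch's dominant gradient direction, SIFT-style:
    build a magnitude-weighted histogram over gradient orientations
    and take the peak bin as the main direction (in radians)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)   # orientations in [0, 2*pi)
    hist, edges = np.histogram(ang, bins=bins, range=(0.0, 2.0 * np.pi),
                               weights=mag)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])    # center of the peak bin
```

A small error in this estimate shifts every subsequently normalized gradient, which is exactly the failure mode the invention aims to avoid.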
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide an image feature extraction method and an image feature extraction device, aiming at avoiding the main direction estimation and the subsequent direction normalization operation of a local image block by constructing a local coordinate system of the local image block of an image, so as to accurately obtain the local invariant feature representing the image.
In order to achieve the above object, the present invention provides an image feature extraction method, including:
acquiring a first local image block of an image, and determining a central point and a sampling point in the first local image block;
establishing a local coordinate system according to the central point and the sampling point;
extracting local invariant features at the sampling points according to the local coordinate system;
and fusing the local invariant features at the sampling points to obtain image features.
Preferably, the step of extracting the local invariant features at the sampling points according to the local coordinate system comprises:
setting wavelets with a preset frequency and a preset direction at the sampling points according to the local coordinate system, to obtain a preset number of wavelets;
acquiring a second local image block which takes the sampling point as the center in the first local image block;
and performing inner product calculation on the preset number of wavelets and the second local image block to obtain the preset number of responses, wherein the responses represent local invariant features at the sampling points.
Preferably, the step of fusing the local invariant features at the sampling points to obtain image features includes:
obtaining a phase of each of the responses;
and encoding each frequency according to the phase by using a preset bit, and calculating to obtain a histogram with a preset length so as to obtain the image characteristics.
Preferably, the step of encoding each frequency according to the phase by a predetermined bit and calculating a histogram with a predetermined length further comprises:
acquiring a histogram of another image, and calculating a distance value between the histograms of the two images;
and matching the local invariant features of the two images according to the distance value.
Preferably, the step of matching the local invariant feature according to the distance value includes:
comparing the distance value to a predetermined value;
and if the distance value is close to the preset value, judging that the two images are similar.
In addition, to achieve the above object, the present invention also proposes an extraction device including:
the acquisition module is used for acquiring a first local image block of an image and determining a central point and a sampling point in the first local image block;
the establishing module is used for establishing a local coordinate system according to the central point and the sampling point;
the extraction module is used for extracting local invariant features at the sampling points according to the local coordinate system;
and the fusion processing module is used for fusing the local invariant features at the sampling points to obtain image features.
Preferably, the extraction module comprises:
the setting unit is used for setting wavelets with preset frequency and preset direction at the sampling points according to the local coordinate system to obtain wavelets with preset quantity;
a first acquisition unit configured to acquire a second partial image block centered on the sample point in the first partial image block;
and the first calculation unit is used for carrying out inner product calculation on the wavelets and the second local image blocks of the preset number to obtain the response of the preset number, and the response represents the local invariant feature at the sampling point.
Preferably, the fusion processing module includes:
a second acquisition unit configured to acquire a phase of each of the responses;
and the second calculation unit is used for encoding each frequency by a preset bit according to the phase and calculating to obtain a histogram with a preset length so as to obtain the image characteristics.
Preferably, the extraction device further comprises:
the calculation module is used for acquiring the histogram of the other image and calculating the distance value between the histograms of the two images;
and the matching module is used for matching the local invariant features of the two images according to the distance value.
Preferably, the matching module comprises:
a comparison unit for comparing the distance value with a predetermined value;
and the judging unit is used for judging that the two images are similar if the magnitude of the distance value is close to the preset value.
According to the extraction method and the extraction device of the image features, the first local image block of the image is obtained, the central point and the sampling point in the first local image block are determined, then the local coordinate system is established according to the central point and the sampling point, and the local invariant features at the sampling point are extracted according to the local coordinate system, so that the local invariant features at the sampling point are fused to obtain the image features. Therefore, by constructing a rotatable and invariant local coordinate system for the local image block of the image, the estimation of the main direction and the subsequent direction normalization operation of the local image block can be avoided, and the local invariant feature representing the image can be accurately obtained.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of an image feature extraction method according to the present invention;
FIG. 2 is a schematic diagram of a local coordinate system according to the present invention;
FIG. 3 is a schematic diagram of a refinement process for extracting local invariant features at the sampling points according to the local coordinate system in the step of FIG. 1;
FIG. 4 is a schematic view of a refining process of fusing local invariant features at the sampling points to obtain image features in the step of FIG. 1;
FIG. 5 is a schematic flow chart of a second embodiment of the image feature extraction method according to the present invention;
FIG. 6 is a schematic diagram of a detailed process of the step in FIG. 5 for matching the local invariant features according to the distance values;
FIG. 7 is a functional block diagram of the first embodiment of the extracting apparatus of the present invention;
FIG. 8 is a schematic diagram of a refinement function module of the extraction module of FIG. 7;
FIG. 9 is a schematic diagram of a refinement function module of the fusion processing module of FIG. 7;
FIG. 10 is a functional block diagram of a second embodiment of the extracting apparatus of the present invention;
fig. 11 is a schematic diagram of a detailed functional module of the matching module in fig. 10.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides an image feature extraction method, and referring to fig. 1, in an embodiment, the image feature extraction method includes the following steps:
step S10, acquiring a first local image block of an image, and determining a central point and a sampling point in the first local image block;
step S20, establishing a local coordinate system according to the central point and the sampling point;
in this embodiment, due to the influences of factors such as rotation, shooting angle, distance and the like, the same object may look completely different in different images, and therefore, the local invariant feature of the image is an important basis for understanding the image and classifying and identifying the object in the image, and is an important basis for understanding subsequent images. Therefore, the invention provides an image feature extraction method based on a local coordinate system, so that when the rotation and the shooting angle are changed, the local invariant feature representing the image can be accurately extracted.
In this embodiment, a small local area of the image, that is, the first local image block, is extracted. Assuming the central point of the first local image block is P and the sampling points around it from which features are extracted are Pi, the local coordinate system shown in fig. 2 can be established: the line connecting the central point P and the sampling point Pi is taken as the y-axis, the direction perpendicular to it as the x-axis, and the angle between the x-axis and the horizontal direction is θ. Because θ follows the line from P to Pi rather than the image axes, the coordinate system performs an implicit angle correction, so it remains unchanged when the image is affected by rotation, changes of shooting angle and similar factors.
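The construction just described can be sketched as follows; the function names are ours, and the choice of which perpendicular of the P-Pi line becomes the positive x-axis is an assumption that fig. 2, not reproduced here, would settle:

```python
import math

def frame_angle(center, sample):
    """Angle theta of the local x-axis against the horizontal: the y-axis
    runs along the line from the central point P to the sampling point Pi,
    and the x-axis is taken perpendicular to it."""
    dx, dy = sample[0] - center[0], sample[1] - center[1]
    return math.atan2(dy, dx) - math.pi / 2.0

def to_local(point, center, theta):
    """Express a point in the local frame; because theta follows the
    P-Pi line, the result is unchanged by a global image rotation."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(theta), math.sin(theta)
    return (c * dx + s * dy, -s * dx + c * dy)
```

Rotating the whole configuration (center, sampling point and query point together) leaves the local coordinates unchanged, which is the rotation invariance the description relies on.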
In the preferred embodiment, a Gabor filter is used to set the parameters and extract the local invariant features of the image. In image processing, the Gabor function defines a linear filter suitable for edge extraction; its frequency and orientation representations are similar to those of the human visual system, which makes it well suited to texture representation and discrimination. In the spatial domain, a two-dimensional Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. Filters of other forms, such as Log-Gabor, can also be adopted and selected according to actual needs.
Step S30, extracting local invariant features at the sampling points according to the local coordinate system;
and step S40, fusing the local invariant features at the sampling points to obtain image features.
In this embodiment, a predetermined number of wavelets can be obtained by setting wavelets with predetermined frequencies and predetermined directions in the local coordinate system; for example, 40 Gabor wavelets with 8 directions and 5 frequencies can be designed. Of course, in other embodiments the frequencies and directions in the local coordinate system can also be set according to the specific situation of the image, and the invention is not limited to this example. The predetermined number of wavelets are then convolved or inner-producted with the second local image block centered on the sampling point, obtaining the predetermined number of responses, e.g. 40. The impulse response of a Gabor filter is a sine wave (for the two-dimensional case, a sinusoidal plane wave) multiplied by a Gaussian function; by the multiplication-convolution property of the Fourier transform, the Fourier transform of this impulse response is the convolution of the Fourier transform of its harmonic function with that of the Gaussian function. The filter consists of a real part and an imaginary part, which are mutually orthogonal. The responses obtained at the sampling points Pi then undergo two-bit or four-bit coding and histogram calculation in turn, and the resulting histogram is finally taken as the representation of the local invariant features, so that the complete local invariant features of the image are extracted.
The method for extracting the image features comprises the steps of obtaining a first local image block of an image, determining a central point and a sampling point in the first local image block, establishing a local coordinate system according to the central point and the sampling point, and extracting local invariant features at the sampling point according to the local coordinate system so as to fuse the local invariant features at the sampling point to obtain the image features. Therefore, by constructing a rotatable and invariant local coordinate system for the local image block of the image, the estimation of the main direction and the subsequent direction normalization operation of the local image block can be avoided, and the local invariant feature representing the image can be accurately obtained.
Further, as shown in fig. 3, on the basis of the embodiment of fig. 1, in this embodiment, the step S30 includes:
step S301, according to the local coordinate system, setting a preset frequency and a preset direction at the sampling point to obtain wavelets with preset quantity;
in this embodiment, the local Gabor features at the sampling points Pi are extracted from the local coordinate system, and 40 Gabor wavelets with 5 frequencies and 8 directions are designedu=0~4,v=0~7;
Wherein,(xi,yi) Is the coordinate of sampling point Pi, f is frequencyThe ratio, v is the direction, σ is the standard deviation of the gaussian function, and j is the complex number. It will of course be appreciated that these parameter values may be based onThe actual needs are reasonably set.
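The wavelet formula itself appears only as an image in the original patent and does not survive in this text, so the sketch below builds a generic complex 2-D Gabor bank with the stated 5 frequencies and 8 directions; the envelope form, the frequency ladder and all parameter values are illustrative assumptions, not the patent's definition:

```python
import numpy as np

def gabor_bank(size=16, sigma=4.0, f_max=0.25, n_freq=5, n_dir=8):
    """A generic bank of n_freq * n_dir complex Gabor wavelets: a
    Gaussian envelope of standard deviation sigma multiplied by a
    complex sinusoid of frequency f along direction theta_v."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    bank = {}
    for u in range(n_freq):                       # frequency index u = 0..4
        f = f_max / (np.sqrt(2.0) ** u)           # step down by sqrt(2)
        for v in range(n_dir):                    # direction index v = 0..7
            th = v * np.pi / n_dir
            carrier = np.exp(2j * np.pi * f * (x * np.cos(th) + y * np.sin(th)))
            envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
            bank[(u, v)] = envelope * carrier
    return bank
```

With the defaults this yields the 40 wavelets the description mentions, indexed by (u, v) exactly as in the text.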
Step S302, acquiring a second local image block which takes the sampling point as the center in the first local image block;
step S303, performing inner product calculation on the predetermined number of wavelets and the second local image block to obtain the predetermined number of responses, where the responses represent local invariant features at the sampling points.
In this example, an inner product is taken between each of the 40 wavelets and the second local image block F(x, y) centered on the sampling point Pi, obtaining 40 responses.
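The inner-product step itself is simple to sketch; `responses` below accepts any dictionary that maps (u, v) indices to complex wavelet arrays of the same shape as the patch (the naming is ours):

```python
import numpy as np

def responses(patch, bank):
    """One complex response per wavelet: the inner product of the wavelet
    with the second local image block F(x, y) centered on the sampling
    point (np.vdot conjugates its first argument)."""
    return {uv: complex(np.vdot(w, patch)) for uv, w in bank.items()}
```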
In this embodiment, each response represents the local invariant feature measured by one wavelet, and the responses of all the wavelets need to be fused to calculate a histogram representing the image features.
In an embodiment, as shown in fig. 4, on the basis of the embodiment of fig. 1, in this embodiment, the step S40 includes:
step S401, obtaining the phase of each response;
step S402, encoding each frequency according to the phase by a predetermined bit, and calculating to obtain a histogram with a predetermined length, so as to obtain the image feature.
In the present embodiment, the responses obtained at the sampling point Pi are complex numbers. The phase of each response is calculated, and a two-bit binary code is generated from it according to a predetermined formula. The two-bit codes are then grouped by frequency u into 8-bit codes, giving 10 integers in the range 0 to 255.
After the rotation-invariant features of the n sampling points Pi around the central point P have been extracted according to the above steps, they are fused to form a histogram representing the invariant features of the whole image: for each frequency, two histograms of length 255 are calculated from the corresponding binary codes.
It is understood that the above-mentioned encoding method can also be modified, such as using 4-bit encoding or encoding according to real and imaginary values.
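The encoding formula is likewise missing from this text. A two-bit phase-quadrant code, taking one bit from the sign of the real part of each response and one from the sign of the imaginary part, is consistent with the surrounding description and is sketched below under that assumption; 256 bins (one per value 0-255) are used here:

```python
import numpy as np

def phase_code_histograms(point_responses, n_freq=5, n_dir=8):
    """Fuse complex responses from several sampling points into
    per-frequency histograms. Assumed coding: per frequency u, the
    signs of the 8 directional real parts form one 8-bit integer and
    the signs of the 8 imaginary parts another, i.e. 10 integers in
    0..255 per sampling point, accumulated into histograms."""
    hists = np.zeros((n_freq, 2, 256), dtype=int)
    for resp in point_responses:          # one {(u, v): complex} per point
        for u in range(n_freq):
            re_code = im_code = 0
            for v in range(n_dir):
                z = resp[(u, v)]
                re_code = (re_code << 1) | int(z.real >= 0)
                im_code = (im_code << 1) | int(z.imag >= 0)
            hists[u, 0, re_code] += 1
            hists[u, 1, im_code] += 1
    return hists
```

The 4-bit variant mentioned above would simply quantize the phase into 16 sectors instead of 4; the fusion into histograms is unchanged.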
In the prior art, the gradient-direction histogram records only the direction of the grey-level changes in the image and omits important information such as their intensity and frequency, so the resulting features discriminate poorly. Adopting Gabor features yields a local texture representation with stronger discriminative and descriptive power, so that different image information can be distinguished better.
In an embodiment, as shown in fig. 5, on the basis of the embodiment of fig. 1, in this embodiment, after the step S402, the method further includes:
step S50, acquiring the histogram of another image, and calculating the distance value between the histograms of the two images;
in this embodiment, a histogram of another image is obtained, where the another image may be the same as, similar to, or completely unrelated to the selected image, and the similarity between the two images can be determined by calculating a distance value between the histograms of the two images. Thus, the applicability is wider.
And step S60, matching the local invariant features of the two images according to the distance value.
In this embodiment, the local invariant features are matched according to the distance value computed between the two 255-dimensional feature histograms extracted from the two images.
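The distance formula is also not reproduced in this text; the chi-square distance below is a common choice for comparing feature histograms and stands in for the patent's formula purely for illustration:

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two feature histograms; 0 for
    identical histograms, growing with dissimilarity."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))
```

A distance near 0 would then indicate similar images, which matches the thresholding step that follows.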
In an embodiment, as shown in fig. 6, on the basis of the embodiment of fig. 5, in this embodiment, the step S60 includes:
step S601, comparing the distance value with a preset value;
in this embodiment, the distance value may be compared with a predetermined value, where the predetermined value may be reasonably set according to actual needs, and in this preferred embodiment, the predetermined value may be preferably 0.
Step S602, if the distance value is close to the predetermined value, it is determined that the two images are similar.
In this embodiment, if the magnitude of the distance value is determined to be close to the predetermined value, for example 0, the two images can be judged to be similar. Further, the smaller the distance value between the histograms of the two images (for example, within the range 0~1), the higher their similarity; conversely, the larger the distance value (for example, greater than 1), the lower their similarity.
The invention also provides an extraction device 1, with reference to fig. 7, in an embodiment, the extraction device 1 comprises:
the system comprises an acquisition module 10, a processing module and a processing module, wherein the acquisition module is used for acquiring a first local image block of an image and determining a central point and a sampling point in the first local image block;
the establishing module 20 is used for establishing a local coordinate system according to the central point and the sampling point;
in this embodiment, due to the influences of factors such as rotation, shooting angle, distance and the like, the same object may look completely different in different images, and therefore, the local invariant feature of the image is an important basis for understanding the image and classifying and identifying the object in the image, and is an important basis for understanding subsequent images. Therefore, the invention provides an image feature extraction method based on a local coordinate system, so that when the rotation and the shooting angle are changed, the local invariant feature representing the image can be accurately extracted.
In this embodiment, a small local area of the image, that is, the first local image block, is extracted. Assuming the central point of the first local image block is P and the sampling points around it from which features are extracted are Pi, the local coordinate system shown in fig. 2 can be established: the line connecting the central point P and the sampling point Pi is taken as the y-axis, the direction perpendicular to it as the x-axis, and the angle between the x-axis and the horizontal direction is θ. Because θ follows the line from P to Pi rather than the image axes, the coordinate system performs an implicit angle correction, so it remains unchanged when the image is affected by rotation, changes of shooting angle and similar factors.
In the preferred embodiment, a Gabor filter is used to set the parameters and extract the local invariant features of the image. In image processing, the Gabor function defines a linear filter suitable for edge extraction; its frequency and orientation representations are similar to those of the human visual system, which makes it well suited to texture representation and discrimination. In the spatial domain, a two-dimensional Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. Filters of other forms, such as Log-Gabor, can also be adopted and selected according to actual needs.
An extraction module 30, configured to extract a local invariant feature at the sampling point according to the local coordinate system;
and the fusion processing module 40 is configured to fuse the local invariant features at the sampling points to obtain image features.
In this embodiment, a predetermined number of wavelets can be obtained by setting wavelets with predetermined frequencies and predetermined directions in the local coordinate system; for example, 40 Gabor wavelets with 8 directions and 5 frequencies can be designed. Of course, in other embodiments the frequencies and directions in the local coordinate system can also be set according to the specific situation of the image, and the invention is not limited to this example. The predetermined number of wavelets are then convolved or inner-producted with the second local image block centered on the sampling point, obtaining the predetermined number of responses, e.g. 40. The impulse response of a Gabor filter is a sine wave (for the two-dimensional case, a sinusoidal plane wave) multiplied by a Gaussian function; by the multiplication-convolution property of the Fourier transform, the Fourier transform of this impulse response is the convolution of the Fourier transform of its harmonic function with that of the Gaussian function. The filter consists of a real part and an imaginary part, which are mutually orthogonal. The responses obtained at the sampling points Pi then undergo two-bit or four-bit coding and histogram calculation in turn, and the resulting histogram is finally taken as the representation of the local invariant features, so that the complete local invariant features of the image are extracted.
The extraction device provided by the invention obtains the first local image block of the image, determines the central point and the sampling point in the first local image block, then establishes a local coordinate system according to the central point and the sampling point, and extracts the local invariant features at the sampling point according to the local coordinate system so as to fuse the local invariant features at the sampling point to obtain the image features. Therefore, by constructing a rotatable and invariant local coordinate system for the local image block of the image, the estimation of the main direction and the subsequent direction normalization operation of the local image block can be avoided, and the local invariant feature representing the image can be accurately obtained.
Further, as shown in fig. 8, on the basis of the embodiment of fig. 7, in this embodiment, the extracting module 30 includes:
a setting unit 301, configured to set a predetermined frequency and a predetermined direction at the sampling point according to the local coordinate system, so as to obtain a predetermined number of wavelets;
in this embodiment, the local Gabor features at the sampling points Pi are extracted from the local coordinate system, and 40 Gabor wavelets with 5 frequencies and 8 directions are designedu=0~4,v=0~7;
Wherein,(xi,yi) Is the coordinate of sampling point Pi, f is frequencyThe ratio, v is the direction, σ is the standard deviation of the gaussian function, and j is the complex number. It will of course be appreciated that these parameter values may be based onThe actual needs are reasonably set.
A first acquisition unit 302 configured to acquire a second partial image block centered on the sample point in the first partial image block;
a first calculating unit 303, configured to perform inner product calculation on the predetermined number of wavelets and the second local image block to obtain a response of the predetermined number, where the response represents a local invariant feature at the sampling point.
In this example, an inner product is taken between each of the 40 wavelets and the second local image block F(x, y) centered on the sampling point Pi, obtaining 40 responses.
In this embodiment, each response represents the local invariant feature measured by one wavelet, and the responses of all the wavelets need to be fused to calculate a histogram representing the image features.
In an embodiment, as shown in fig. 9, on the basis of the above embodiment of fig. 7, in this embodiment, the fusion processing module 40 includes:
a second acquisition unit 401 for acquiring a phase of each of the responses;
a second calculating unit 402, configured to perform encoding of a predetermined bit for each frequency according to the phase, and calculate a histogram with a predetermined length to obtain the image feature.
In the present embodiment, the responses obtained at the sampling point Pi are complex numbers. The phase of each response is calculated, and a two-bit binary code is generated from it according to a predetermined formula. The two-bit codes are then grouped by frequency u into 8-bit codes, giving 10 integers in the range 0 to 255.
After the rotation-invariant features of the n sampling points Pi around the central point P have been extracted according to the above steps, they are fused to form a histogram representing the invariant features of the whole image: for each frequency, two histograms of length 255 are calculated from the corresponding binary codes.
It is understood that the above-mentioned encoding method can also be modified, such as using 4-bit encoding or encoding according to real and imaginary values.
In the prior art, the gradient-direction histogram records only the direction of the grey-level changes in the image and omits important information such as their intensity and frequency, so the resulting features discriminate poorly. Adopting Gabor features yields a local texture representation with stronger discriminative and descriptive power, so that different image information can be distinguished better.
In an embodiment, as shown in fig. 10, on the basis of the above embodiment of fig. 7, in this embodiment, the extraction device 1 further includes:
a calculation module 50, configured to obtain a histogram of another image and calculate a distance value between the histograms of the two images;
In this embodiment, a histogram of another image is obtained; this other image may be identical to, similar to, or completely unrelated to the selected image, and the similarity between the two images can be determined by calculating the distance value between their histograms. The method is therefore widely applicable.
And a matching module 60, configured to match the local invariant features of the two images according to the distance value.
In this embodiment, the local invariant features are matched according to the distance value, which is computed by a preset formula from the two 255-dimensional feature histograms extracted from the two images.
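Since the distance formula itself does not appear above, the chi-square distance is shown here as one common choice for comparing feature histograms; it is 0 for identical histograms, which fits the embodiment's use of a predetermined value of 0.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two feature histograms.

    The patent's exact formula is not reproduced in this text; the
    chi-square distance is one standard choice for histogram
    comparison. Identical histograms give 0; larger values mean
    less similar images.
    """
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1 = h1 / max(h1.sum(), eps)      # normalise so overall scale does not matter
    h2 = h2 / max(h2.sum(), eps)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

a = np.array([10, 20, 30, 40])
b = np.array([10, 20, 30, 40])
c = np.array([40, 30, 20, 10])
print(chi_square_distance(a, b))  # 0.0 for identical histograms
print(chi_square_distance(a, c))  # positive for differing histograms
```

A smaller distance indicates a higher similarity between the two images, consistent with the comparison against the predetermined value described next.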
In an embodiment, as shown in fig. 11, on the basis of the above embodiment of fig. 1, in this embodiment, the matching module 60 includes:
a comparing unit 601 for comparing the distance value with a predetermined value;
In this embodiment, the distance value may be compared with a predetermined value, which can be set according to actual needs; in this preferred embodiment the predetermined value is 0.
A determining unit 602, configured to determine that the two images are similar if the magnitude of the distance value is close to the predetermined value.
In this embodiment, if the distance value is determined to be close to the predetermined value, for example 0, the two images may be judged to be similar. Further, the smaller the distance value between the histograms of the two images (for example, within the range 0 to 1), the higher the similarity of the two images; conversely, the larger the distance value (for example, greater than 1), the lower the similarity.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. An image feature extraction method is characterized by comprising the following steps:
acquiring a first local image block of an image, and determining a central point and a sampling point in the first local image block;
establishing a local coordinate system according to the central point and the sampling point;
extracting local invariant features at the sampling points according to the local coordinate system;
fusing the local invariant features at the sampling points to obtain image features; the step of extracting the local invariant features at the sampling points according to the local coordinate system comprises:
setting a preset frequency and a preset direction at the sampling point according to the local coordinate system to obtain a preset number of wavelets;
acquiring a second local image block which takes the sampling point as the center in the first local image block;
and performing inner product calculation on the preset number of wavelets and the second local image block to obtain a preset number of responses, wherein the responses represent the local invariant features at the sampling point.
2. The method for extracting image features according to claim 1, wherein the step of fusing the local invariant features at the sampling points to obtain the image features comprises:
obtaining a phase of each of the responses;
and encoding each frequency according to the phase by using a preset bit, and calculating to obtain a histogram with a preset length so as to obtain the image characteristics.
3. The method of claim 2, wherein the step of encoding each frequency according to the phase by a predetermined bit and calculating a histogram of a predetermined length further comprises:
acquiring a histogram of another image, and calculating a distance value between the histograms of the two images;
and matching the local invariant features of the two images according to the distance value.
4. The method for extracting image features according to claim 3, wherein the step of performing matching of the local invariant features according to the distance values comprises:
comparing the distance value to a predetermined value;
and if the distance value is close to the preset value, judging that the two images are similar.
5. An extraction device, characterized in that it comprises:
the system comprises an acquisition module, a sampling module and a processing module, wherein the acquisition module is used for acquiring a first local image block of an image and determining a central point and a sampling point in the first local image block;
the establishing module is used for establishing a local coordinate system according to the central point and the sampling point;
the extraction module is used for extracting local invariant features at the sampling points according to the local coordinate system;
the fusion processing module is used for fusing the local invariant features at the sampling points to obtain image features;
the extraction module comprises:
the setting unit is used for setting a preset frequency and a preset direction at the sampling point according to the local coordinate system to obtain a preset number of wavelets;
a first acquisition unit configured to acquire a second partial image block centered on the sample point in the first partial image block;
and the first calculation unit is used for performing inner product calculation on the preset number of wavelets and the second local image block to obtain a preset number of responses, the responses representing the local invariant features at the sampling point.
6. The extraction device according to claim 5, wherein the fusion processing module includes:
a second acquisition unit configured to acquire a phase of each of the responses;
and the second calculation unit is used for encoding each frequency by a preset bit according to the phase and calculating to obtain a histogram with a preset length so as to obtain the image characteristics.
7. The extraction device as recited in claim 6, further comprising:
the calculation module is used for acquiring the histogram of the other image and calculating the distance value between the histograms of the two images;
and the matching module is used for matching the local invariant features of the two images according to the distance value.
8. The extraction device of claim 7, wherein the matching module comprises:
a comparison unit for comparing the distance value with a predetermined value;
and the judging unit is used for judging that the two images are similar if the magnitude of the distance value is close to the preset value.
CN201510489413.5A 2015-08-11 2015-08-11 The extracting method and extraction element of characteristics of image Active CN105117728B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510489413.5A CN105117728B (en) 2015-08-11 2015-08-11 The extracting method and extraction element of characteristics of image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510489413.5A CN105117728B (en) 2015-08-11 2015-08-11 The extracting method and extraction element of characteristics of image

Publications (2)

Publication Number Publication Date
CN105117728A CN105117728A (en) 2015-12-02
CN105117728B true CN105117728B (en) 2019-01-25

Family

ID=54665711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510489413.5A Active CN105117728B (en) 2015-08-11 2015-08-11 The extracting method and extraction element of characteristics of image

Country Status (1)

Country Link
CN (1) CN105117728B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084134A (en) * 2019-04-03 2019-08-02 东华大学 A kind of face attendance checking system based on cascade neural network and Fusion Features

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101488223A (en) * 2008-01-16 2009-07-22 中国科学院自动化研究所 Image curve characteristic matching method based on average value standard deviation descriptor
CN103295014A (en) * 2013-05-21 2013-09-11 上海交通大学 Image local feature description method based on pixel location arrangement column diagrams

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9324003B2 (en) * 2009-09-14 2016-04-26 Trimble Navigation Limited Location of image capture device and object features in a captured image


Non-Patent Citations (1)

Title
Research on Image Matching Algorithms Based on Local Features; Hou Xiaoli; China Master's Theses Full-text Database, Information Science and Technology; 20141115; Chapter 3 of the thesis

Also Published As

Publication number Publication date
CN105117728A (en) 2015-12-02

Similar Documents

Publication Publication Date Title
CN105474234B (en) A kind of vena metacarpea knows method for distinguishing and vena metacarpea identification device
US8831355B2 (en) Scale robust feature-based identifiers for image identification
CN105654098B (en) Hyperspectral remote sensing image classification method and system
CN104134200B (en) Mobile scene image splicing method based on improved weighted fusion
KR101506060B1 (en) Feature-based signatures for image identification
US10769407B2 (en) Fingerprint registration method and device
CN105550625B (en) A kind of living body iris detection method and terminal
CN104794440B (en) A kind of false fingerprint detection method based on the multiple dimensioned LBP of more piecemeals
CN108550165A (en) A kind of image matching method based on local invariant feature
CN110765992A (en) Seal identification method, medium, equipment and device
CN113723309A (en) Identity recognition method, identity recognition device, equipment and storage medium
CN112801031A (en) Vein image recognition method and device, electronic equipment and readable storage medium
CN103077528A (en) Rapid image matching method based on DCCD (Digital Current Coupling)-Laplace and SIFT (Scale Invariant Feature Transform) descriptors
CN105678720A (en) Image matching judging method and image matching judging device for panoramic stitching
CN104268550A (en) Feature extraction method and device
CN105117728B (en) The extracting method and extraction element of characteristics of image
CN109993176A (en) Image local feature describes method, apparatus, equipment and medium
Temel et al. ReSIFT: Reliability-weighted sift-based image quality assessment
CN109815791B (en) Blood vessel-based identity recognition method and device
CN111325216B (en) Image local feature description method and device, computer equipment and storage medium
Youssef et al. Color image edge detection method based on multiscale product using Gaussian function
Tiwari et al. Meandering energy potential to locate singular point of fingerprint
CN104951761B (en) information processing method and electronic equipment
Zhao et al. Saliency guided gradient similarity for fast perceptual blur assessment
Joshi et al. Investigating the impact of thresholding and thinning methods on the performance of partial fingerprint identification systems: a review

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220802

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee after: SHENZHEN HUAFU INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 518000 Shenzhen University, 3688 Nanhai Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN University

TR01 Transfer of patent right
CP01 Change in the name or title of a patent holder

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee after: Shenzhen Huafu Technology Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: SHENZHEN HUAFU INFORMATION TECHNOLOGY Co.,Ltd.

CP01 Change in the name or title of a patent holder