CN111091133A - Bronze object golden text image identification method based on sift algorithm - Google Patents

Bronze object golden text image identification method based on sift algorithm

Info

Publication number
CN111091133A
CN111091133A
Authority
CN
China
Prior art keywords
image
matching
sift
golden
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911069702.4A
Other languages
Chinese (zh)
Other versions
CN111091133B (en
Inventor
王慧琴
王可
赵若晴
商立丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Qinghechuang Intelligent Technology Co ltd
Original Assignee
Xian University of Architecture and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Architecture and Technology filed Critical Xian University of Architecture and Technology
Priority to CN201911069702.4A priority Critical patent/CN111091133B/en
Publication of CN111091133A publication Critical patent/CN111091133A/en
Application granted granted Critical
Publication of CN111091133B publication Critical patent/CN111091133B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a bronze ware golden text image identification method based on the sift algorithm, which comprises the following steps: first, bronze ware golden text rubbing image data are collected using an image segmentation method and a golden text data set is established; next, sift feature points of the bronze ware golden text rubbing images are detected and described using an improved sift feature extraction algorithm; finally, the rubbing image feature points obtained in the previous step are matched using the cosine of the vector included angle, and an image matching result is obtained. Because the improved sift feature extraction algorithm reduces descriptor dimensionality, computational complexity is lowered and real-time performance is greatly improved; the method is clearly superior to the traditional method in matching accuracy and time complexity and is better suited to bronze golden text image recognition and matching. The precision of golden text image matching is improved, the time complexity is reduced, and golden text images can be effectively identified.

Description

Bronze object golden text image identification method based on sift algorithm
Technical Field
The invention belongs to the technical field of digital image processing, and particularly relates to a bronze object golden text image recognition method based on a sift algorithm.
Background
Research on bronze wares is an important part of palaeography, the study of ancient scripts. The characters can only be understood deeply by studying, with the scientific methods of palaeography, the glyph features, expressions, sentence patterns, and grammar of bronze inscriptions, together with their habits and evolution across each historical stage. In short, interpreting bronze inscriptions requires researchers to have a broad knowledge base and long training, which makes it a very challenging task.
The sift algorithm adopts a local image feature descriptor that is based on scale space and remains invariant to image scaling, rotation, and even affine transformation, and it is widely applied in the field of image processing. With the development of computer vision, registration based on image feature points has become the mainstream direction and development trend of image matching technology, and many feature point extraction algorithms have been proposed at home and abroad: the SURF algorithm proposed by Herbert Bay et al. in 2006, the BRISK algorithm proposed by Stefan Leutenegger et al. in 2011, the ORB algorithm proposed by Ethan Rublee et al., and the FREAK algorithm proposed by Alexandre Alahi et al. All of these are superior to the sift algorithm in time complexity, but the sift algorithm is still widely applied because its accuracy is generally superior to the others. Chinese researchers have likewise proposed many feature point detection algorithms: Yang Xiang proposed a feature detection algorithm based on SUSAN, and Wang Li et al. invented a multi-scale Harris feature detection algorithm based on image blocking. These newer methods consume less time than the original sift algorithm but are also less accurate.
For these reasons, the applicant has studied bronze image recognition with the aim of finding an algorithm that reduces time complexity while preserving accuracy.
Disclosure of Invention
Aiming at the defects and shortcomings of the sift algorithm, the invention provides a bronze ware golden text image identification method based on an improved sift algorithm, in order to improve search efficiency and matching accuracy.
In order to realize the task, the invention adopts the following technical solution:
a bronze object golden text image recognition method based on an improved sift algorithm is characterized by comprising the following steps:
acquiring bronze ware golden text rubbing image data by using an image segmentation method, and establishing a golden text data set;
step two, detecting sift characteristic points of the bronze ware golden text rubbing image by using an improved sift characteristic extraction algorithm, and describing the characteristic points;
and step three, performing feature point matching on the rubbing image feature points obtained in the step two by using a matching method of vector included angle cosine values, and obtaining an image matching result.
In the image segmentation method of step one, automatic-threshold binarization is applied to the bronze ware golden text rubbing image; all rows and columns whose number of black pixels exceeds a set threshold are found, the image is segmented along them, and the bronze ware golden text rubbing image data are acquired.
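The row/column scan described above can be sketched as a projection-profile segmentation. The sketch below assumes the rubbing has already been binarized (for example with an automatic Otsu threshold) so that ink pixels are 1; the function name `segment_glyphs` and the `min_black` threshold are illustrative assumptions, not names from the patent.

```python
import numpy as np

def segment_glyphs(binary, axis, min_black=3):
    """Split a binarized rubbing along one axis.

    `binary` holds 1 for black (ink) pixels, 0 for background.
    Rows/columns whose black-pixel count exceeds `min_black` are kept,
    and each contiguous run of kept lines delimits one glyph band.
    """
    profile = binary.sum(axis=axis)      # black pixels per row or column
    mask = profile > min_black
    runs, start = [], None
    for i, on in enumerate(mask):
        if on and start is None:
            start = i                    # band begins
        elif not on and start is not None:
            runs.append((start, i))      # band ends (half-open interval)
            start = None
    if start is not None:
        runs.append((start, len(mask)))
    return runs

# Toy rubbing: two ink blocks separated by a blank column band.
img = np.zeros((8, 12), dtype=int)
img[2:6, 1:4] = 1
img[1:7, 8:11] = 1
col_bands = segment_glyphs(img, axis=0, min_black=0)  # [(1, 4), (8, 11)]
```

Cropping `img[:, a:b]` for each `(a, b)` in `col_bands`, then repeating the scan with `axis=1`, yields the individual character images for the data set.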
Further, the step of performing identification matching by using the improved sift feature extraction algorithm in the step two is as follows:
a) First, taking a feature point as the center, a concentric ring region of radius 8 is extracted as the feature point neighborhood. With two pixels as one unit, the radius is decreased step by step, dividing the neighborhood into 4 concentric rings of unit width, with each grid cell representing one pixel;
the center of the concentric circle is taken as a characteristic point, and the characteristic point is expressed as M (p)1,p2) The diameter is 16 at the maximum, and the circular area can be expressed as:
(x-p1)2+(x-p2)2=r2(1)
After the image is rotated, only the positions of pixels within the same ring change, while their other relative characteristics remain essentially unchanged, so the descriptor has good rotation invariance;
b) The gradient modulus and direction of each pixel are calculated, and a gradient histogram accumulates values in 12 gradient direction bins within each ring to form 1 seed point; each seed point thus carries vector information for 12 directions, generating a 4 × 12 = 48-dimensional feature vector in total;
c) To avoid abrupt changes in the feature descriptor caused by small displacements in feature point localization, the descriptor must be ordered by a circular left shift: the maximum value of the innermost ring is shifted left to the first pixel position, and the other concentric rings are rotated in the same way, so the ordering does not change after the image is rotated by any angle;
d) Finally, the vector is normalized to further reduce the influence of illumination change. Let the feature point descriptor be M' = (m'1, m'2, ..., m'48); the normalization formula is then:
mi = m'i / sqrt((m'1)^2 + (m'2)^2 + ... + (m'48)^2),  i = 1, ..., 48    (2)
e) To measure the degree of similarity between two vectors, a similarity function can be adopted; the smaller the function value, the larger the difference between the vectors and the lower their similarity;
the similarity of vectors is measured by the cosine of the vector included angle: the larger the cosine value, the smaller the angle between the two vectors and the higher their similarity. The cosine value between two vectors is derived from the Euclidean dot product and magnitude formula, as shown below:
A · B = ||A|| ||B|| cos θ    (3).
according to the bronze ware golden text image recognition method based on the sift algorithm, due to the adoption of the improved sift feature extraction algorithm, the operation complexity is reduced through dimensionality reduction, the time efficiency is higher than that of the traditional algorithm, the real-time performance is greatly improved, the method is obviously superior to the traditional method in the matching accuracy and time complexity, and the method is more suitable for bronze ware golden text image recognition and matching.
By constructing circularly partitioned feature descriptors, the dimensionality of the feature vectors is reduced and a new sift feature descriptor is finally constructed, which improves the precision of golden text image matching, reduces the time complexity, and enables effective identification of golden text images.
Drawings
FIG. 1 is a flow chart of an improved sift image matching algorithm;
FIG. 2 shows the improved feature descriptor, wherein (a) is the partition of the improved keypoint neighborhood and (b) is the descriptor gradient direction;
FIG. 3 compares two sets of image experiment results with the conventional sift algorithm, wherein graphs (a) and (c) are the matching results of the conventional sift algorithm, and graphs (b) and (d) are the matching results of the improved sift algorithm.
The invention is described in further detail below with reference to the figures and examples.
Detailed Description
Research has found that bronze ware golden text images often contain many noise points, and over the long evolution of inscription fonts each character has developed, on average, more than twenty variant forms; since the form of a character is not fixed, a feature extraction method with abstract mapping capability is needed. The applicant has therefore proposed an improved sift matching algorithm.
The embodiment provides a bronze object golden text image identification method based on sift algorithm, which specifically comprises the following steps:
1) collecting bronze ware golden text rubbing image data by using an image segmentation method, and establishing a golden text data set;
2) detecting the sift characteristic points of the bronze ware golden text rubbing image by using an improved sift algorithm, and describing the characteristic points;
3) performing feature point matching on the rubbing image feature points obtained in step 2) using the matching method of vector included angle cosine values, and obtaining an image matching result.
In step 1), the image segmentation method is as follows: automatic-threshold binarization is applied to the image, all rows and columns whose number of black pixels exceeds a set threshold are found, the image is segmented along them, and the bronze ware golden text rubbing image data are acquired.
Since the traditional sift algorithm computes high-dimensional vectors, its computation load is large and its matching time complexity grows. In step 2) the inventor therefore adopts an improved sift feature extraction algorithm, which reduces the dimensionality of the sift feature descriptor; this brings advantages such as high processing speed and strong robustness and improves matching precision, and the matching time falls after the dimensionality reduction. The traditional sift algorithm produces many mismatches on this data set, whereas the improved sift feature extraction algorithm has good distinctiveness, rotation resistance, and noise resistance.
The steps of adopting the improved sift feature extraction algorithm to carry out identification and matching are as follows:
a) First, taking a feature point as the center, a concentric ring region of radius 8 is extracted as the feature point neighborhood. With two pixels as one unit, the radius is decreased step by step, dividing the neighborhood into 4 concentric rings of unit width, with each grid cell representing one pixel.
The center of the concentric circles (FIG. 2) is taken as the feature point, expressed as M(p1, p2); the maximum diameter is 16, and the circular region can be expressed as:
(x - p1)^2 + (y - p2)^2 = r^2    (1)
After the image is rotated, only the positions of pixels within the same ring change, while their other relative characteristics remain essentially unchanged, so the descriptor has good rotation invariance.
b) The gradient modulus and direction of each pixel are calculated, and a gradient histogram accumulates values in 12 gradient direction bins within each ring to form 1 seed point; each seed point thus carries vector information for 12 directions, generating a 4 × 12 = 48-dimensional feature vector in total.
The new feature descriptor applies different weights to the different rings, making the feature expression of key points more specific; compared with the sift feature descriptor, the resulting feature vector reduces computational complexity and saves computation time.
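Steps a) and b) can be sketched as follows. This is a minimal, unweighted reading: a 17 × 17 neighborhood (radius 8), four rings of width 2, and a 12-bin orientation histogram per ring, giving 48 dimensions. The ring weighting mentioned above is omitted, and the function name and bin edges are the editor's assumptions.

```python
import numpy as np

def ring_descriptor(patch, n_rings=4, n_bins=12):
    """48-dim descriptor from a 17x17 patch centred on a feature point.

    Gradient magnitudes are accumulated into 12 orientation bins,
    separately for each of 4 concentric rings of width 2 pixels.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                            # gradient modulus
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)       # direction in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)

    c = patch.shape[0] // 2                           # centre index
    yy, xx = np.mgrid[:patch.shape[0], :patch.shape[1]]
    r = np.hypot(yy - c, xx - c)                      # distance from centre

    desc = np.zeros((n_rings, n_bins))
    for ring in range(n_rings):                       # radii (0,2], (2,4], ...
        sel = (r > 2 * ring) & (r <= 2 * (ring + 1))
        for b in range(n_bins):
            desc[ring, b] = mag[sel & (bins == b)].sum()
    return desc.reshape(-1)                           # 4 x 12 = 48 values

patch = np.random.default_rng(0).random((17, 17))
d = ring_descriptor(patch)                            # shape (48,)
```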
c) To avoid abrupt changes in the feature descriptor caused by small displacements in feature point localization, the descriptor must be ordered by a circular left shift: the maximum value of the innermost ring is shifted left to the first pixel position, and the other concentric rings are rotated in the same way, so the ordering does not change after the image is rotated by any angle.
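One reading of the circular left shift in step c), sketched in Python: every ring is rotated by the offset that brings the innermost ring's maximum to the first position, so a global image rotation (which shifts all rings equally) leaves the ordering unchanged. The function name is an assumption.

```python
import numpy as np

def canonical_shift(desc):
    """Circular left shift of a 48-dim ring descriptor (4 rings x 12 bins).

    The innermost ring's maximum is moved to position 0, and the other
    rings are rotated by the same offset, making the ordering invariant
    to rotations of the image.
    """
    rings = desc.reshape(4, 12).copy()
    shift = int(np.argmax(rings[0]))          # peak bin of the innermost ring
    rings = np.roll(rings, -shift, axis=1)    # same left shift for every ring
    return rings.reshape(-1)

d = np.zeros(48)
d[5] = 9.0          # innermost ring peaks at bin 5
d[12 + 7] = 3.0     # second ring has a value at bin 7
out = canonical_shift(d)                      # out[0] == 9.0, out[14] == 3.0
```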
d) Finally, the vector is normalized to further reduce the influence of illumination change. Let the feature point descriptor be M' = (m'1, m'2, ..., m'48); the normalization formula is then:
mi = m'i / sqrt((m'1)^2 + (m'2)^2 + ... + (m'48)^2),  i = 1, ..., 48    (2)
e) To measure the degree of similarity between two vectors, a similarity function can be adopted; the smaller the function value, the larger the difference between the vectors and the lower their similarity.
Cosine similarity measures the similarity of vectors by the cosine of their included angle: the larger the cosine value, the smaller the angle between the two vectors and the higher their similarity. The cosine value between two vectors can be derived from the Euclidean dot product and magnitude formula, as shown below:
A · B = ||A|| ||B|| cos θ    (3).
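From formula (3), cos θ = (A · B) / (||A|| ||B||), so after normalizing descriptors to unit length the cosine is just a dot product. A hedged sketch of the matching step; the acceptance threshold 0.95 is a made-up illustration, not a value from the patent.

```python
import numpy as np

def cosine_match(desc_a, descs_b, thresh=0.95):
    """Match one descriptor against a set by the vector-angle cosine.

    Returns the index of the best match in `descs_b`, or -1 when even
    the best cosine falls below `thresh` (angle too large).
    """
    a = desc_a / np.linalg.norm(desc_a)
    b = descs_b / np.linalg.norm(descs_b, axis=1, keepdims=True)
    cos = b @ a                    # A.B = ||A|| ||B|| cos(theta), unit norms
    best = int(np.argmax(cos))
    return best if cos[best] >= thresh else -1

db = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
idx = cosine_match(np.array([2.0, 0.0]), db)   # 0: parallel to db[0]
```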
the inventor adopts 2-group comparative experimental analysis, and the improved sift feature extraction algorithm proposed in the embodiment is shown in fig. 3 (a-d). And performing a characteristic matching and optimizing comparison experiment on the experiment picture, and performing objective evaluation on the four algorithms in three aspects of matching logarithm, accuracy and time consumption. The experimental result of the improved sift feature extraction algorithm is compared with the traditional sift algorithm, in fig. 3, a graph a and a graph c are the matching result of the traditional sift algorithm, and a graph b and a graph d are the matching result of the improved sift feature extraction algorithm.
Table 1 shows the results of the comparative experiment between the improved sift feature extraction algorithm and the traditional sift algorithm.
Table 1: comparison of matching results of two algorithms
As can be seen from Table 1, the improved sift feature extraction algorithm adopted in this embodiment is more time-efficient than the traditional sift algorithm: by reducing dimensionality it lowers computational complexity, greatly improves real-time performance, and can better complete fast matching of bronze golden text images. It is clearly superior to the traditional method in matching accuracy and time complexity and is better suited to bronze golden text image recognition and matching. Compared with the original algorithm, the improved sift feature extraction algorithm has a good matching effect; the number of matched feature point pairs is somewhat reduced, but the matching accuracy is improved, ensuring the high accuracy of the algorithm.
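The three evaluation aspects above (number of matched pairs, accuracy, time consumption) can be reproduced with a small harness like the one below. The matcher shown is a stand-in nearest-neighbour-by-cosine function, not the patented algorithm, and all names are illustrative.

```python
import time
import numpy as np

def evaluate(match_fn, queries, database, truth):
    """Report matched pairs, accuracy, and wall-clock time for a matcher."""
    t0 = time.perf_counter()
    preds = [match_fn(q, database) for q in queries]
    elapsed = time.perf_counter() - t0
    correct = sum(int(p == t) for p, t in zip(preds, truth))
    return {"pairs": len(preds),
            "accuracy": correct / len(preds),
            "seconds": elapsed}

def cos_match(q, db):
    """Stand-in matcher: nearest neighbour by cosine similarity."""
    q = q / np.linalg.norm(q)
    db = db / np.linalg.norm(db, axis=1, keepdims=True)
    return int(np.argmax(db @ q))

db = np.eye(3)                                 # three toy unit descriptors
report = evaluate(cos_match, list(db), db, truth=[0, 1, 2])
```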

Claims (3)

1. A bronze object golden text image identification method based on sift algorithm is characterized by comprising the following steps:
acquiring bronze ware golden text rubbing image data by using an image segmentation method, and establishing a golden text data set;
step two, detecting sift characteristic points of the bronze ware golden text rubbing image by using an improved sift characteristic extraction algorithm, and describing the characteristic points;
and step three, performing feature point matching on the rubbing image feature points obtained in the step two by using a matching method of vector included angle cosine values, and obtaining an image matching result.
2. The method as claimed in claim 1, wherein the image segmentation method in step one performs automatic-threshold binarization on the bronze ware golden text rubbing image, finds all rows and columns whose number of black pixels exceeds a set threshold, performs image segmentation, and acquires the bronze ware golden text rubbing image data.
3. The method as claimed in claim 1, wherein the step of identifying matches using the modified sift feature extraction algorithm in step two is as follows:
a) First, taking a feature point as the center, a concentric ring region of radius 8 is extracted as the feature point neighborhood. With two pixels as one unit, the radius is decreased step by step, dividing the neighborhood into 4 concentric rings of unit width, with each grid cell representing one pixel;
the center of the concentric circle is taken as a characteristic point, and the characteristic point is expressed as M (p)1,p2) The diameter is 16 at the maximum, and the circular area can be expressed as:
(x-p1)2+(x-p2)2=r2(1)
After the image is rotated, only the positions of pixels within the same ring change, while their other relative characteristics remain essentially unchanged, so the descriptor has good rotation invariance;
b) The gradient modulus and direction of each pixel are calculated, and a gradient histogram accumulates values in 12 gradient direction bins within each ring to form 1 seed point; each seed point thus carries vector information for 12 directions, generating a 4 × 12 = 48-dimensional feature vector in total;
c) To avoid abrupt changes in the feature descriptor caused by small displacements in feature point localization, the descriptor must be ordered by a circular left shift: the maximum value of the innermost ring is shifted left to the first pixel position, and the other concentric rings are rotated in the same way, so the ordering does not change after the image is rotated by any angle;
d) Finally, the vector is normalized to further reduce the influence of illumination change. Let the feature point descriptor be M' = (m'1, m'2, ..., m'48); the normalization formula is then:
mi = m'i / sqrt((m'1)^2 + (m'2)^2 + ... + (m'48)^2),  i = 1, ..., 48    (2)
e) To measure the degree of similarity between two vectors, a similarity function is adopted; the smaller the function value, the larger the difference between the vectors and the lower their similarity;
the similarity of the vectors is measured by the cosine of the vector included angle, wherein the larger the cosine value, the smaller the angle between the two vectors and the higher the similarity between the vectors; the cosine value between the two vectors is derived from the Euclidean dot product and magnitude formula, as shown below:
A · B = ||A|| ||B|| cos θ    (3).
CN201911069702.4A 2019-11-05 2019-11-05 Bronze ware gold image recognition method based on sift algorithm Active CN111091133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911069702.4A CN111091133B (en) 2019-11-05 2019-11-05 Bronze ware gold image recognition method based on sift algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911069702.4A CN111091133B (en) 2019-11-05 2019-11-05 Bronze ware gold image recognition method based on sift algorithm

Publications (2)

Publication Number Publication Date
CN111091133A true CN111091133A (en) 2020-05-01
CN111091133B CN111091133B (en) 2023-05-30

Family

ID=70393081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911069702.4A Active CN111091133B (en) 2019-11-05 2019-11-05 Bronze ware gold image recognition method based on sift algorithm

Country Status (1)

Country Link
CN (1) CN111091133B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183585A (en) * 2020-09-08 2021-01-05 西安建筑科技大学 Bronze ware inscription similarity measurement method based on multi-feature measurement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136751A (en) * 2013-02-05 2013-06-05 电子科技大学 Improved scale invariant feature transform (SIFT) image feature matching algorithm
WO2019042232A1 (en) * 2017-08-31 2019-03-07 西南交通大学 Fast and robust multimodal remote sensing image matching method and system
WO2019134327A1 (en) * 2018-01-03 2019-07-11 东北大学 Facial expression recognition feature extraction method employing edge detection and sift
CN110097093A (en) * 2019-04-15 2019-08-06 河海大学 A kind of heterologous accurate matching of image method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ding Lixiang et al.: "Improved SIFT descriptor algorithm based on a cosine kernel function", Journal of Graphics *
Liu Guangxin: "Research on an improved SIFT algorithm based on the image autocorrelation matrix", China New Telecommunications *


Also Published As

Publication number Publication date
CN111091133B (en) 2023-05-30

Similar Documents

Publication Publication Date Title
Alsmadi et al. Fish recognition based on robust features extraction from size and shape measurements using neural network
Alkhawlani et al. Content-based image retrieval using local features descriptors and bag-of-visual words
Dubey et al. Rotation and illumination invariant interleaved intensity order-based local descriptor
Cao et al. Similarity based leaf image retrieval using multiscale R-angle description
Zagoris et al. Segmentation-based historical handwritten word spotting using document-specific local features
WO2023103372A1 (en) Recognition method in state of wearing mask on human face
CN106874942B (en) Regular expression semantic-based target model rapid construction method
Keivani et al. Automated analysis of leaf shape, texture, and color features for plant classification.
Liu et al. Fingerprint indexing based on singular point correlation
Aptoula Bag of morphological words for content-based geographical retrieval
CN111091133A (en) Bronze object golden text image identification method based on sift algorithm
Zhu et al. Scene text detection via extremal region based double threshold convolutional network classification
Yanikoglu et al. Sabanci-Okan system at LifeCLEF 2014 plant identification competition
Mani et al. Design of a novel shape signature by farthest point angle for object recognition
Sundararajan et al. Continuous set of image processing methodology for efficient image retrieval using BOW SHIFT and SURF features for emerging image processing applications
CN108491888B (en) Environmental monitoring hyperspectral data spectrum section selection method based on morphological analysis
Aoulalay et al. Classification of Moroccan decorative patterns based on machine learning algorithms
Ramesh et al. Multiple object cues for high performance vector quantization
CN110705569A (en) Image local feature descriptor extraction method based on texture features
Zhang et al. Sketch-based image retrieval using contour segments
Guruprasad et al. Multimodal recognition framework: an accurate and powerful Nandinagari handwritten character recognition model
Park et al. Image retrieval technique using rearranged freeman chain code
El-Mashad et al. Evaluating the robustness of feature correspondence using different feature extractors
Chen et al. A multi-layer contrast analysis method for texture classification based on LBP
He et al. The binary image retrieval based on the improved shape context

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240229

Address after: 710000, Room D, 7th Floor, Building CD, Building 2, Xinqing Yayuan, 17A Yanta Road, Beilin District, Xi'an City, Shaanxi Province

Patentee after: Xi'an Qinghechuang Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: 710055 No. 13, Yanta Road, Shaanxi, Xi'an

Patentee before: Xian University of Architecture and Technology

Country or region before: China
