CN113095380B - Image hash processing method based on adjacent gradient and structural features - Google Patents
Image hash processing method based on adjacent gradient and structural features
- Publication number
- CN113095380B (application CN202110327145.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- matrix
- hash
- gradient
- adjacent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses an image hash processing method based on adjacent gradients and structural features. The method comprises: reading an image from an image library and preprocessing it; extracting the three colour components from the preprocessed image and obtaining the statistical features of the image with an adjacent-gradient and binarization-quantization compression method; converting the preprocessed image into a colour space, extracting its luminance component, and converting that component into a three-dimensional surface; extracting the structural features of the image from the luminance-component image and the three-dimensional surface; and concatenating the statistical and structural features into an intermediate hash, whose positions are then scrambled with a random generator to yield the final hash sequence. The method is robust to most content-preserving manipulations; because it operates on image blocks it is not robust to large-angle rotation, but its collision rate is very low. For image authentication, the image hash is formed from features of the image library, which gives high security.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an image hash processing method based on adjacent gradients and structural features.
Background
In recent years, the content security of digital media has received wide attention. With the rapid advance of science and technology, networks, and intelligent multimedia, image-editing software has become widely available, and ever more user-generated images and videos are uploaded to internet communities. People can easily obtain large amounts of image information and manipulate images with software such as Photoshop: adding text to an image, changing its brightness and contrast, compositing a new image, and so on. A single image may thus spawn many copies, so distinguishing between images is important. Image hashing techniques have therefore been developed to detect and classify images.
Many researchers have proposed notable algorithms in the image hashing field. For example, Lei et al. combine the Radon transform and the discrete Fourier transform (DFT), extracting invariant features after the image transform and quantizing the DFT coefficients of a one-dimensional DFT to form the hash. Other work increases resistance to rotation while preserving distinctiveness: the maximum inscribed circle of the image is extracted to reconstruct an image block, the secondary image is converted to the frequency domain by DFT, and robust features are drawn from the magnitude matrix of the Fourier coefficients by non-uniform sampling to form the hash.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and title of the application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned problems of the conventional image hash processing.
Therefore, the technical problems solved by the invention are as follows: traditional image hashing algorithms cannot detect colour changes and are weakly robust to noise; conversely, algorithms that are robust to large-angle rotation run too slowly and suffer a high collision rate.
In order to solve the above technical problems, the invention provides the following technical scheme: reading images from an image library and preprocessing them; extracting the three colour components from the preprocessed image and obtaining the statistical features of the image with an adjacent-gradient and binarization-quantization compression method; converting the preprocessed image into a colour space, extracting its luminance component, and converting that component into a three-dimensional surface; extracting the structural features of the image from the luminance-component image and the three-dimensional surface; and concatenating the statistical and structural features into an intermediate hash, whose positions are scrambled with a random generator to obtain the final hash sequence.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the preprocessing of the image comprises normalizing the images to the same size and, after resizing, performing a Gaussian low-pass filtering operation to reduce noise.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the gaussian low-pass filtering operation comprises filtering an image by using a gaussian low-pass filter with a template of 3 × 3 and a standard deviation σ of 1, wherein a calculation formula of the filtering process is as follows:
wherein: m G (i, j) is the value of the ith row and jth column element in the template.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: obtaining the statistical features of the image comprises: extracting the three components I_R, I_G, I_B (R, G, B) from the preprocessed image and partitioning each component into non-overlapping blocks of size b × b; computing the mean of each block of I_R to form the mean matrix M_R of the component; computing two adjacent gradients of the mean matrix, a row adjacent gradient and a column adjacent gradient; and finally, after binarization-quantization processing, describing each row of the row adjacent gradient by its mean μ, variance δ, skewness s and kurtosis ω to form the feature Z_H, and likewise each column of the column adjacent gradient to form Z_L, so that the adjacent-gradient statistic of I_R is S_R = [Z_H, Z_L]. The statistics of I_G and I_B, obtained in the same way, are S_G and S_B, and the joint statistical feature of the RGB colour image is H_1 = [S_R, S_G, S_B], of length L_1 = 6 × N/b.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: calculating the two adjacent gradients of the mean matrix comprises: denoting the row and column adjacent gradients of the mean matrix by g_H and g_L, the ith row vector g_H(i) of the row adjacent gradient matrix and the jth column vector g_L(j) of the column adjacent gradient matrix are

g_H(i) = M_R(i + 1, :) − M_R(i, :)
g_L(j) = M_R(:, j + 1) − M_R(:, j)

wherein M_R(i, :) is the ith row vector of the mean matrix and M_R(:, j) its jth column vector; when i = N/b, M_R(i + 1, :) is taken as M_R(1, :), and when j = N/b, M_R(:, j + 1) is taken as M_R(:, 1).
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the binarization-quantization compression comprises comparing statistics of the whole feature matrix C with the mean matrix Q_m to obtain the row adjacent gradient statistic D_H = [d_1, d_2, d_3, …, d_K], and binarizing D_H to obtain the statistical feature of the row adjacent gradient Z_H = [z_1, z_2, …, z_K]:

z_H(i) = 1 if d_H(i) ≥ d_H(i + 1), and 0 otherwise,

wherein z_H(i) is the ith element of Z_H; when i = K, d_H(i + 1) = d_H(1).
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: extracting the luminance component comprises: denoting the luminance component of the image by Y and converting it into a three-dimensional space to extract structural features; taking the horizontal coordinate of the Y component as the x axis, the vertical coordinate as the y axis, and the pixel value at each coordinate as the z axis, a three-dimensional curve feature diagram is constructed; the peak-top and peak-valley curves of the Y component projected on the xoz and yoz planes are drawn for the two viewing angles, and the three-dimensional surface of the Y component is simultaneously cut by equidistant slice planes parallel to the yoz plane to obtain a segmentation diagram, from which the structural features of the image are extracted.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: extracting the structural features of the image comprises: dividing the Y-component image into a series of non-overlapping blocks of size b × b and taking the mean of the pixel values of each block to obtain a feature matrix M; obtaining the peak-top and peak-valley curves of M under the xoz and yoz projections; computing the concave-convex point sets of the peak-top and peak-valley curves under the different projections and merging them into a combined set; obtaining from the combined set the position information on the xoy plane; and combining and binarizing to obtain the local feature-point feature Z_1. To extract the position information of the concave-convex point sets under the different projections, position features are constructed: as with the extraction of the feature points, the intersection of the position information of the different viewing angles on the xoy plane is taken to obtain intersection matrices, on which row-union and transposition operations give the position matrix Z_2, so that the local structural feature is Z = [Z_1, Z_2]. The three-dimensional surface is further divided equally into K = N/b section matrices by planes parallel to the xoz plane, and counting the pixel points contained in each section and the variance of each section matrix, followed by binarization, gives the overall feature S, so that the structural feature is H_2 = [Z_1, Z_2, S], of length L_2 = 6 × N/b − 2.
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the overall feature comprises the pixel-point-count statistic S_1, obtained by counting the number of pixel points contained in each section, and the variance statistic S_2 of each section matrix; S_1 and S_2 are binarized respectively to obtain S_3 and S_4, which together form the overall feature S = [S_3, S_4].
As a preferable scheme of the image hash processing method based on the adjacent gradient and the structural feature, in the invention: the final hash sequence comprises: concatenating the adjacent-gradient statistical feature H_1 and the structural feature H_2 into the intermediate hash H_m = [H_1, H_2], of length L = L_1 + L_2 = 12 × N/b − 2 bits; generating a key K of length L with a random generator; and obtaining the ith bit of the intermediate hash H_m after scrambling with the key K by the position index

h(i) = H_m(K[i])

wherein K[i], the ith number of the pseudo-random number sequence K, indexes H_m, and the indexed value is assigned to the ith position of the new hash sequence h; this position scrambling yields the final hash sequence.
The beneficial effects of the invention are: the method is robust to most content-preserving manipulations; because it operates on image blocks it is not robust to large-angle rotation, and its collision rate is very low. The invention can also be used for image authentication, forming the image hash from features of the image library, which gives high security.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor. Wherein:
fig. 1 is a schematic basic flowchart of an image hash processing method based on neighboring gradients and structural features according to a first embodiment of the present invention;
fig. 2 is an image hash block diagram of an image hash processing method based on adjacent gradients and structural features according to a first embodiment of the present invention;
FIG. 3 is a three-dimensional image of the Y component of an image hash processing method based on neighboring gradients and structural features according to a first embodiment of the present invention;
FIG. 4 is a projection graph of the Y component of the image hashing processing method based on the adjacent gradient and the structural feature according to the first embodiment of the present invention;
FIG. 5 is a diagram illustrating the effect of conventional image processing on Hash based on the image Hash processing method of neighboring gradients and structural features according to a second embodiment of the present invention;
FIG. 6 is a diagram illustrating the result of uniqueness analysis of an image hashing method based on neighboring gradients and structural features according to a second embodiment of the present invention;
FIG. 7 is a graph comparing ROC curves of different algorithms for an image hash processing method based on neighboring gradients and structural features according to a second embodiment of the present invention;
fig. 8 shows a lens and 10 attacks thereof to obtain an image based on an image hash processing method of adjacent gradients and structural features according to a second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
Referring to fig. 1 to 4, for an embodiment of the present invention, an image hash processing method based on adjacent gradients and structural features is provided, including:
s1: reading the images in the image library and preprocessing the images. In which it is to be noted that,
preprocessing the image includes normalizing the image to the same size and performing a gaussian low pass filtering operation after the image is resized to reduce noise pollution.
The Gaussian low-pass filtering operation includes filtering the image with a Gaussian low-pass filter whose template is 3 × 3 with standard deviation σ = 1; the filtering is the convolution

I_f(i, j) = Σ_m Σ_n M_G(m, n) · I(i + m, j + n),

with the normalized Gaussian template

M_G(i, j) = exp(−(i² + j²) / (2σ²)) / Σ_i Σ_j exp(−(i² + j²) / (2σ²)),

wherein M_G(i, j) is the value of the element in row i, column j of the template.
S2: and extracting three components from the preprocessed image, and obtaining the statistical characteristics of the image by using a proximity gradient and a binarization quantization compression method. In which it is to be noted that,
the statistical characteristics of the image are obtained by extracting three components I of R, G and B from the preprocessed image R 、I G 、I B And partitioning each component into blocks with sub-block size b × b, calculating I R The mean value of each block is obtained to obtain a mean value matrix M of each component R Mean matrix M R Is represented as follows:
wherein: m is i,j Taking the pixel value of the j column of the ith row as the mean value matrix, and then comparing I R The mean matrix of (a) calculates the row approach gradient g H Column-sum neighboring gradient g L :
g_H(i) = M_R(i + 1, :) − M_R(i, :)
g_L(j) = M_R(:, j + 1) − M_R(:, j)

wherein g_H(i) and g_L(j) are the ith row vector of the row adjacent gradient matrix and the jth column vector of the column adjacent gradient matrix; M_R(i, :) is the ith row vector of the mean matrix and M_R(:, j) its jth column vector; when i = N/b, M_R(i + 1, :) is M_R(1, :), and when j = N/b, M_R(:, j + 1) is M_R(:, 1).
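The block-mean matrix and its two wrap-around adjacent gradients can be sketched as follows (the function names are illustrative; `np.roll` implements the circular wrap at the last row and column):

```python
import numpy as np

def block_mean_matrix(channel, b):
    # Mean matrix M_R: mean of each non-overlapping b x b block
    # (image side divisible by b assumed).
    n = channel.shape[0] // b
    m = channel.shape[1] // b
    c = np.asarray(channel, dtype=float)[:n * b, :m * b]
    return c.reshape(n, b, m, b).mean(axis=(1, 3))

def adjacent_gradients(M):
    # g_H(i) = M(i+1,:) - M(i,:) and g_L(j) = M(:,j+1) - M(:,j),
    # with circular wrap-around at the last row/column as the text specifies.
    g_h = np.roll(M, -1, axis=0) - M
    g_l = np.roll(M, -1, axis=1) - M
    return g_h, g_l
```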
Finally, the ith row of the row adjacent gradient is described by its mean μ_H, variance δ_H, skewness s_H and kurtosis ω_H, giving the feature vector V_H(i) = [μ_H(i), δ_H(i), s_H(i), ω_H(i)]^T; normalizing and arranging these vectors yields a feature matrix C = [c_1, c_2, c_3, …, c_K] of size 4 × K, where, with g_H(i, j) the jth element of the ith row,

μ_H(i) = (1/K) Σ_j g_H(i, j)
δ_H(i) = (1/K) Σ_j (g_H(i, j) − μ_H(i))²
s_H(i) = (1/K) Σ_j (g_H(i, j) − μ_H(i))³ / δ_H(i)^{3/2}
ω_H(i) = (1/K) Σ_j (g_H(i, j) − μ_H(i))⁴ / δ_H(i)²

wherein K = N/b. The matrix C is averaged by rows to obtain Q_m = [q_1, q_2, q_3, q_4]^T, where q_m(j) is the jth element of Q_m, c_i(j) is the jth element of the ith column of C, and K is the number of columns of C. Comparing the whole matrix C with the mean vector Q_m yields the row adjacent gradient statistic D_H = [d_1, d_2, d_3, …, d_K], where d_H(i) is the ith element of D_H.
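The four per-row statistics that form the columns of C can be sketched as follows (population moments are assumed, since the patent does not state the normalization):

```python
import numpy as np

def row_statistics(g_h):
    # Mean, variance, skewness, kurtosis of each row of the row-gradient
    # matrix; stacked into the 4 x K feature matrix C (one column per row).
    mu = g_h.mean(axis=1)
    delta = g_h.var(axis=1)
    centred = g_h - mu[:, None]
    std = np.sqrt(delta)
    safe = np.where(std == 0, 1.0, std)      # guard against flat rows
    skew = (centred ** 3).mean(axis=1) / safe ** 3
    kurt = (centred ** 4).mean(axis=1) / safe ** 4
    return np.vstack([mu, delta, skew, kurt])
```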
D_H is binarized to obtain the statistical feature of the row adjacent gradient Z_H = [z_1, z_2, …, z_K]:

z_H(i) = 1 if d_H(i) ≥ d_H(i + 1), and 0 otherwise,

wherein z_H(i) is the ith element of Z_H; when i = K, let d_H(i + 1) = d_H(1). In the same way the column adjacent gradient statistic Z_L of I_R is obtained, giving the adjacent-gradient statistic S_R = [Z_H, Z_L]; the statistics S_G and S_B of I_G and I_B follow likewise, and the joint statistical feature of the RGB colour image is H_1 = [S_R, S_G, S_B], of length L_1 = 6 × N/b.
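A hedged sketch of the comparison and binarization steps; the Euclidean distance and the direction of the inequality are assumptions, since the text fixes only the circular wrap d_H(K + 1) = d_H(1):

```python
import numpy as np

def gradient_statistic(C):
    # D_H: one scalar per column of C, comparing the column with the
    # column-mean vector Q_m. Euclidean distance is an assumption; the
    # patent text only says the statistic compares C with Q_m.
    q_m = C.mean(axis=1, keepdims=True)
    return np.linalg.norm(C - q_m, axis=0)

def binarize(d):
    # z(i) = 1 when d(i) >= d(i+1), with circular wrap d(K+1) = d(1)
    # (the comparison direction is likewise an assumption).
    return (d >= np.roll(d, -1)).astype(np.uint8)
```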
S3: and converting the preprocessed image into a color space, extracting a brightness component of the color space, and converting the brightness component into a three-dimensional image. In which it is to be noted that,
extracting the brightness component of the image comprises setting the brightness component of the image as Y, referring to fig. 3, converting Y into a three-dimensional space to extract structural features, setting the transverse position of the coordinate of the Y component as an x-axis, setting the longitudinal position as a Y-axis, setting the pixel point value corresponding to the coordinate as a z-axis, constructing a three-dimensional curve feature map, referring to fig. 4, drawing a peak-top curve and a peak-valley curve of the Y component projected on xoz and yoz planes under 2 visual angles, and simultaneously segmenting the three-dimensional curve of the Y component by utilizing an equidistant slice plane parallel to a yoz plane to obtain a segmentation map, thereby extracting the structural features of the image.
S4: and extracting the structural features of the image according to the brightness component image and the three-dimensional image. In which it is to be noted that,
the image structure feature extraction method comprises the steps of dividing a preprocessed Y component image into a series of non-overlapping small blocks, wherein the block size is b multiplied by b, carrying out mean value calculation on the pixel value of each small block to obtain a feature matrix M and a feature matrix, and obtaining a peak top curve and a peak valley curve of M under the projection of xoz and yoz, wherein the calculation formula is as follows:
wherein:and &>Respectively a peak-top curve and a peak-valley curve under xoz projection>And &>Respectively a peak-top curve and a peak-valley curve under the yoz projection, max (·, 1), min (·, 1) are respectively operations for taking the maximum value and the minimum value of the matrix M according to rows, and max (·, 2) and min (·, 2) are respectively operations for taking the maximum value and the minimum value of the matrix M according to columns; then, the peak-top curve and the peak-valley curve under different projections are subjected to concave-convex point set solving, and the peak-top curve under the xoz projection is firstly subjected to the judgment of the curve>And (3) solving a concave-convex point set and directly carrying out binarization, wherein the calculation formula is as follows: />
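The four projection curves can be sketched as follows (which matrix axis corresponds to which projection plane is an assumption about the image orientation):

```python
import numpy as np

def projection_curves(M):
    # Peak-top and peak-valley curves of the block-mean surface M:
    # rows are collapsed for the xoz projection and columns for the
    # yoz projection (orientation is an assumption).
    top_xoz, valley_xoz = M.max(axis=1), M.min(axis=1)
    top_yoz, valley_yoz = M.max(axis=0), M.min(axis=0)
    return top_xoz, valley_xoz, top_yoz, valley_yoz
```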
The binarization marks each point of the curve as 1 when it is a concave or convex point (a local extremum) and 0 otherwise; when the point lies on the boundary, i.e. when i − 1 or i + 1 falls outside 1, …, N/b, it is compared only with its single interior neighbour to decide whether it is a concave-convex point. The concave-convex point sets of the peak-valley curve on the xoz plane and of the peak-top and peak-valley curves on the yoz plane are obtained in turn. Because the concave and convex points of the image differ between projections, and in order to compress the hash length of the algorithm while keeping its good classification performance, the concave-convex point sets of the peak-top curves and of the peak-valley curves under the different projections are merged, and the merged sets A and B are combined to obtain the local feature-point feature Z_1 = [A, B].
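The concave-convex point test, including the special handling of boundary points, can be sketched as:

```python
import numpy as np

def bump_points(curve):
    # Binary mask of concave/convex points: 1 where the point is a strict
    # local extremum; a boundary point is compared only with its single
    # interior neighbour, as the text describes.
    c = np.asarray(curve, dtype=float)
    n = len(c)
    out = np.zeros(n, dtype=np.uint8)
    for i in range(n):
        neigh = []
        if i > 0:
            neigh.append(c[i - 1])
        if i < n - 1:
            neigh.append(c[i + 1])
        if all(c[i] > v for v in neigh) or all(c[i] < v for v in neigh):
            out[i] = 1
    return out
```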
To improve the distinctiveness of the algorithm, the position information of the concave-convex point sets under the different projections is extracted and position features are constructed, as follows: the position information matrix on the xoy plane of the concave-convex point set of the peak-top curve under the xoz projection is P = [P_1, P_2, …, P_{N/b}], where P_i = [p_1, p_2, …, p_{N/b}]^T (i = 1, 2, …, N/b); that of the peak-top curve under the yoz projection is U = [U_1, U_2, …, U_{N/b}], where U_i = [u_1, u_2, …, u_{N/b}]^T (i = 1, 2, …, N/b); similarly, the position information matrix of the peak-valley curve under the xoz projection is Q = [Q_1, Q_2, …, Q_{N/b}], and that of the peak-valley curve under the yoz projection is V = [V_1, V_2, …, V_{N/b}].
Then, as with the extraction of the structural feature point features, intersections are calculated from the position information of the different viewing angles on the xoy plane: D_1 = P ∩ U is the intersection matrix of matrix P and matrix U, and D_2 = Q ∩ V is the intersection matrix of matrix Q and matrix V. The solved matrices D_1 and D_2 are then subjected to row-union and transposition operations to obtain D_3 and D_4, which jointly form the position matrix Z_2 = [D_3, D_4], so that the local structural feature Z = [Z_1, Z_2] is obtained.
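A minimal sketch of this intersection step, treating the position matrices as binary masks on the xoy plane (the example values of P, U, Q, V are made up for illustration; the "row-union" reading of the garbled phrase is an assumption):

```python
import numpy as np

# Hypothetical 0/1 position matrices on the xoy plane; names follow the text.
P = np.array([[1, 0], [1, 1]])
U = np.array([[1, 1], [0, 1]])
Q = np.array([[0, 1], [1, 0]])
V = np.array([[1, 1], [1, 0]])

D1 = P & U            # intersection matrix D_1 = P ∩ U (peak-top positions)
D2 = Q & V            # intersection matrix D_2 = Q ∩ V (peak-valley positions)

# Row-union (any mark in the row) then join both, giving position feature Z_2.
D3 = D1.any(axis=1).astype(int)
D4 = D2.any(axis=1).astype(int)
Z2 = np.concatenate([D3, D4])
```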
In order to enrich the extracted image features, the global features of the whole image are extracted as follows: first, the three-dimensional image is equally divided into i = N/b section matrices by planes parallel to the xoz axis; the pixel point set S_1 counting the number of pixel points contained in each section, and the variance S_2 of each section matrix, are computed; S_1 and S_2 are then binarized respectively to obtain S_3 and S_4, with the calculation formula as follows:
These are combined to obtain the overall feature S = [S_3, S_4]. Combining the local structural feature Z = [Z_1, Z_2], comprising the feature point feature Z_1 and the position feature Z_2, with the global feature S containing the pixel point set and variance of each section gives the structural feature H_2 = [Z_1, Z_2, S], of length L_2 = 6×N/b − 2.
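The slice-plane global feature can be sketched as below. The exact counting rule behind the "pixel point set" is not spelled out in this text, so the count of block means above the surface mean is used as a stand-in, and each statistic is binarized against its own mean; both choices are assumptions:

```python
import numpy as np

def global_slice_features(M):
    """Global feature S for the block-mean surface M (N/b x N/b).

    Each row of M is one section cut by a plane parallel to the xoz
    axis.  Per section we take a pixel count (stand-in rule: values
    above the surface mean) and the section variance, then binarize
    each statistic against its own mean.  Illustrative sketch only.
    """
    counts = (M > M.mean()).sum(axis=1)      # S_1: per-section pixel count
    variances = M.var(axis=1)                # S_2: per-section variance
    S3 = (counts >= counts.mean()).astype(int)
    S4 = (variances >= variances.mean()).astype(int)
    return np.concatenate([S3, S4])          # S = [S_3, S_4]
```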
S5: the statistical features and the structural features are combined to obtain an intermediate hash, and the intermediate hash is position-scrambled with a random generator to obtain the final hash sequence. Specifically,
the adjacent gradient statistical feature H_1 and the structural feature H_2 are combined into the intermediate hash H_m, expressed as H_m = [H_1, H_2], of length L = L_1 + L_2 = 12×N/b − 2 bits; a key K of length L is then generated with a random generator, and the intermediate hash H_m is position-indexed after scrambling with the key K, the i-th bit value being called according to the following formula:
h(i) = H_m(K[i])
wherein: k [ i ]]H representing the ith number in the pseudo-random number sequence K to be indexed m And assigning the value to the ith position of the new hash sequence H for position scrambling to obtain the final hash sequence.
Example 2
Figs. 5 to 8 show another embodiment of the present invention, which verifies and explains the technical effects of the method, confirming its actual performance by means of scientific demonstration.
In carrying out the experiments, the parameters were first set as follows: normalized image size N = 256; 3×3 Gaussian low-pass filtering with standard deviation 1; image sub-block size b = 8; hence the hash length is L = L_1 + L_2 = 12×N/b − 2 = 382 bits.
First, the robustness of the hash is analyzed. Five 512×512 test images (Airplane, House, Lena, Baboon and Peppers) are selected and subjected to various conventional processing operations; each standard image is attacked according to Table 1 to obtain 66 similar images, the standard image and its 66 similar images form similar image pairs, and the hash Hamming distance of each similar pair is calculated.
Table 1: parameters used in various conventional image processing in robust performance analysis.
Image processing | Parameter | Parameter values | Number
---|---|---|---
Brightness adjustment | Level | −20, −10, 10, 20 | 4
Contrast adjustment | Level | −20, −10, 10, 20 | 4
Gamma correction | Gamma value | 0.75, 0.9, 1.1, 1.25 | 4
JPEG compression | Quality factor | 30, 40, …, 90, 100 | 8
Image scaling | Ratio | 0.6, 0.8, 1.2, 1.4, 1.6, 1.8 | 6
Salt-and-pepper noise | Level | 0.002, 0.004, …, 0.01 | 5
Multiplicative noise | Variance | 0.002, 0.004, …, 0.01 | 5
Gaussian noise | Mean | 0.002, 0.004, …, 0.01 | 5
Gaussian low-pass filtering | Standard deviation | 0.1, 0.2, 0.3, …, 0.9, 1 | 10
Mean filtering | Template | 3×3, 5×5, 7×7, 9×9 | 4
Watermark embedding | Transparency | 0.3, 0.4, …, 0.7, 0.8 | 6
Rotation | Angle | 0.2, 0.4, 0.8, 1, 2 | 5
The hash of the original image and the hashes of the images after the different processing operations are calculated and their distances obtained. Referring to Fig. 5, the serial numbers on the horizontal axis correspond to the processing operations listed in Table 1, and the vertical axis represents the hash distance. It can be seen that for all processing except image rotation the hash changes little, with distances around 80; because a block scheme is adopted, the hash distance rises sharply once the rotation angle exceeds 1 degree, since rotation changes the block contents significantly. This means the algorithm has good robustness against all the attack operations except large-angle rotation.
Next, the uniqueness of the hash, also called anti-collision capability, is analyzed: two images with different contents should have completely different image hashes. Referring to Fig. 6, which shows the probability distribution of the hash distances of the C(1000, 2) = 499500 image pairs generated from 1000 different images, the Hamming distances of the different image pairs have a maximum of 253, a minimum of 112, and a mean and standard deviation of 182.57 and 15.34 respectively, and the distances are all substantially greater than 80.
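The distances throughout these experiments are plain Hamming distances over binary hash sequences, which can be sketched in one line:

```python
import numpy as np

def hamming(h1, h2):
    """Number of differing bits between two binary hash sequences
    of equal length."""
    return int(np.count_nonzero(np.asarray(h1) != np.asarray(h2)))
```

For example, a 382-bit hash pair differing in two positions has distance 2.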
The threshold of the hash is then analyzed. First a hash distance data set is established containing 499500 different image pairs and 210000 similar image pairs; the similar image pairs cover attacks such as JPEG compression, gamma correction and image scaling, with attack parameters as shown in Table 2 below. As can be seen from Table 2, the minimum distance of the similar image pairs is 0 and the maximum is 135, while the robustness and uniqueness experiments give a threshold range T from 112 to 135. To obtain the optimal threshold for distinguishing similar image pairs from different image pairs, the collision rate and the error detection rate are introduced to analyze the algorithm performance, with the calculation formula as follows:
the data set was analyzed using the formula, the analysis results of which are shown in table 3 below,
table 2: attack operations and parameter settings.
Table 3: collision and error detection rates of different thresholds.
Threshold | Collision rate | Error detection rate
---|---|---
112 | 0 | 2.05×10⁻⁴
118 | 1.80×10⁻⁶ | 1.1×10⁻⁴
122 | 4.00×10⁻⁶ | 6.19×10⁻³
130 | 2.44×10⁻⁵ | 1.43×10⁻⁵
135 | 8.11×10⁻³ | 0
As can be seen from table 3, the resulting optimal threshold is 122.
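A sketch of the two rates used in the threshold analysis. The patent's formula image is not reproduced in this text, so the definitions below are inferred from context: a collision is a different-image pair whose distance falls at or below the threshold T, and an error detection is a similar pair whose distance exceeds T:

```python
def rates(diff_dists, sim_dists, T):
    """Collision rate: fraction of different-image pairs judged similar
    (distance <= T).  Error detection rate: fraction of similar pairs
    judged different (distance > T).  Definitions inferred, not quoted.
    """
    collision = sum(d <= T for d in diff_dists) / len(diff_dists)
    error = sum(d > T for d in sim_dists) / len(sim_dists)
    return collision, error
```

Sweeping T over the 112 to 135 range and picking the value minimizing both rates reproduces the kind of trade-off shown in Table 3.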
To further demonstrate the beneficial effects of the invention, the traditional CS-LBP algorithm, the Itti-Hu algorithm, the TD algorithm and the SG algorithm are selected for comparison with the invention. To ensure a reasonable and fair experiment, all algorithms are run on a unified experimental platform with identical settings; 499500 different image pairs and 210000 similar image pairs are used to verify the classification performance of the algorithms, evaluated with the false acceptance rate P_FPR and the true acceptance rate P_TPR, whose calculation formula is expressed as follows:
The ROC curves of the different algorithms are calculated; referring to Fig. 7, the closer an ROC curve lies to the upper-left corner, the better the classification performance, i.e., the higher the true acceptance rate and the lower the false acceptance rate. Compared with the other algorithms, the curve of the method of the invention lies closest to the upper-left corner of the ROC diagram, indicating the best classification performance: the algorithm extracts the adjacent gradient features of the color image, rich statistical features to enhance robustness, and local features to increase distinctiveness, so that both the robustness and the distinctiveness of the image are taken into account.
To test the image retrieval performance of the method of the present invention, a database of 1010 images is used, comprising 1000 different images and 10 processed versions of the original Lena image: Lena-JPEG 5, Lena-gamma correction 0.75, Lena-contrast +20, Lena-Gaussian low-pass filtering 0.5, Lena-Gaussian noise 0.002, Lena-salt-and-pepper noise 0.002, Lena-brightness +20, Lena-multiplicative noise 0.002, Lena-watermark 5, and Lena-scaling 0.8. Referring to Fig. 8, image retrieval is performed with the original Lena image as the query; Table 4 shows some of the retrieval results:
table 4: the original image is compared to the 1010 test image and the Hash distance.
Index | Image | Distance
---|---|---
1 | Lena (original) | 0
2 | Watermark 5 | 11
3 | Brightness +20 | 18
4 | Contrast +20 | 22
5 | Gaussian noise 0.002 | 22
6 | JPEG 5 | 24
7 | Gaussian low-pass filtering 0.5 | 26
8 | Scaling 0.8 | 26
9 | Multiplicative noise 0.002 | 30
10 | Gamma correction 0.75 | 37
11 | Salt-and-pepper noise 0.002 | 49
12 | Other images | >122
It can be seen that the distances of all image pairs, except those involving the attacked versions of the query image, are greater than the determined threshold T = 122.
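The retrieval procedure behind Table 4 reduces to ranking the database by Hamming distance to the query hash and keeping hits within the threshold; a minimal sketch (the tiny bit lists are illustrative stand-ins for real 382-bit hashes):

```python
def retrieve(query_hash, database, T=122):
    """Rank database images by Hamming distance to the query hash and
    keep those within threshold T.  `database` maps name -> bit list."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    hits = [(name, dist(query_hash, h)) for name, h in database.items()]
    hits = [(n, d) for n, d in hits if d <= T]
    return sorted(hits, key=lambda t: t[1])
```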
Therefore, the method has the advantages of good robustness, a good balance between robustness and distinctiveness, a short hash length and high operation speed; it can detect copied images and can be widely applied in the fields of image authentication and image retrieval.
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.
Claims (10)
1. An image hash processing method based on adjacent gradient and structural features, comprising:
reading images in an image library, and preprocessing the images;
extracting three components from the preprocessed image, and obtaining the statistical features of the image by using adjacent gradients and a binarization quantization compression method;
extracting the three components R, G and B from the preprocessed image, calculating adjacent gradients for each of the three components respectively, then performing binarization quantization compression on the three calculated adjacent gradient information, and taking the compressed data as the statistical features;
converting the preprocessed image into a color space, extracting a brightness component of the color space, and converting the brightness component into a three-dimensional image;
extracting structural features of the image according to the brightness component image and the three-dimensional image;
converting the preprocessed image into the color space YCbCr and extracting its luminance component Y; dividing the Y component image into a series of non-overlapping small blocks of size b×b, and averaging the pixel values of each small block to obtain a feature matrix M; constructing a three-dimensional image with the horizontal coordinate position of the feature matrix M as the x axis, the vertical position as the y axis, and the pixel value at the corresponding coordinate as the z axis; obtaining the peak-top curves and peak-valley curves of the matrix M under the projections of the xoz and yoz planes of the three-dimensional image respectively, calculating the concave-convex point sets of the peak-top and peak-valley curves under the different projections, taking their union, and jointly binarizing them to calculate the local structural feature; in order to extract the position information of the concave-convex point sets under the different projections and construct the position feature, calculating the intersection of the position information of the different viewing angles on the xoy plane to obtain intersection matrices, and performing row-union and transposition operations on the intersection matrices to further obtain the position feature; meanwhile, equally dividing the three-dimensional image into K = N/b section matrices with planes parallel to the xoz axis, where N is the length of the image, obtaining the overall feature S by counting the pixel point set of the number of pixel points contained in each section and the variance of each section matrix and binarizing them, and combining the local structural feature, the position feature and the overall feature to obtain the structural feature;
and combining the statistical characteristics and the structural characteristics to obtain intermediate hash, and performing position scrambling on the intermediate hash by using a random generator to obtain a final hash sequence.
2. The image hash processing method based on adjacent gradient and structural features of claim 1, wherein: the preprocessing of the image comprises:
normalizing the images to the same size, and performing a Gaussian low-pass filtering operation after resizing to reduce noise pollution.
3. The image hash processing method based on adjacent gradient and structural features of claim 2, wherein: the Gaussian low-pass filtering operation comprises:
filtering the image by using a Gaussian low-pass filter with a template of 3 × 3 and a standard deviation sigma of 1, wherein the calculation formula of the filtering process is as follows:
wherein: m G (i, j) is the value of the ith row and jth column element in the template;
the filtering process is as follows: the method comprises the steps of calculating the numerical value of each point of a template through a Gaussian filtering template with the fixed size of 3 x 3 and then processing each pixel point of an image in an iteration mode, and thus obtaining the filtered image.
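A sketch of the 3×3 Gaussian low-pass filtering step. The normalized kernel form and zero padding at the borders are standard choices, assumed here since the patent's formula image is not reproduced in this text:

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """3x3 Gaussian template M_G(i, j) ~ exp(-(x^2 + y^2) / (2*sigma^2)),
    normalized to sum to 1 (the standard form)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, sigma=1.0):
    """Filter by iterating the 3x3 template over every pixel of the
    image, with zero padding at the borders (an assumed convention)."""
    k = gaussian_kernel(3, sigma)
    padded = np.pad(img.astype(float), 1)
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i+3, j:j+3] * k).sum()
    return out
```

Because the kernel sums to 1, a constant image region passes through unchanged away from the borders.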
4. The image hash processing method based on adjacent gradient and structural features as claimed in any one of claims 1 to 3, wherein: the obtaining of the statistical features of the image comprises:
extracting the three components I_R, I_G and I_B from the preprocessed image and partitioning each component into sub-blocks of size b×b; calculating the mean of each block of I_R to form the mean matrix M_R of the component; then calculating 2 adjacent gradients of the mean matrix, a row adjacent gradient and a column adjacent gradient; after calculating the mean μ, variance δ, skewness s and kurtosis ω of the 2 adjacent gradients, performing binarization quantization processing on the matrix formed by them, so that the feature Z_H of each row of the row adjacent gradient and the feature Z_L of each column of the column adjacent gradient are finally described by μ, δ, s and ω through binarization quantization, and the adjacent gradient statistical feature of I_R is obtained as S_R = [Z_H, Z_L]; the features of the images I_G and I_B are obtained in the same way as S_G and S_B respectively, so that the joint statistical feature of the RGB color image is jointly obtained as H_1 = [S_R, S_G, S_B], of length L_1 = 6×N/b, where N is the length of the image.
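The four per-row statistics μ, δ, s and ω can be sketched as below; population moments (divide by the row length) are an assumption, since the claim does not fix the normalization:

```python
import numpy as np

def row_stats(g):
    """Mean, variance, skewness and kurtosis of each row of an
    adjacent-gradient matrix, as population moments.  The exact
    normalization is an assumption, not quoted from the claim."""
    mu = g.mean(axis=1)
    delta = g.var(axis=1)
    std = np.sqrt(delta)
    std = np.where(std == 0, 1, std)         # guard constant rows
    z = (g - mu[:, None]) / std[:, None]
    s = (z**3).mean(axis=1)                  # skewness
    omega = (z**4).mean(axis=1)              # kurtosis
    return np.column_stack([mu, delta, s, omega])
```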
5. The image hash processing method based on adjacent gradient and structural features of claim 4, wherein: said calculating 2 adjacent gradients of the mean matrix M_R of I_R comprises:
the row adjacent gradient and the column adjacent gradient of the mean matrix are g_H and g_L respectively; the i-th row vector g_H(i) of the row adjacent gradient matrix and the j-th column vector g_L(j) of the column adjacent gradient matrix are calculated as follows:
g_H(i) = M_R(i+1, :) − M_R(i, :)
g_L(j) = M_R(:, j+1) − M_R(:, j)
wherein: m is a group of R (i,: is a row vector, M, for the ith row of the mean matrix R (j) is the column vector of the jth column of the mean matrix, and when i = N/b, M is R (i + 1:) is M R (1,:); when j = N/b, M R (j + 1) is M R (:,1)。
6. The image hash processing method based on adjacent gradient and structural features according to claim 5, wherein: the binarization quantization compression comprises:
counting the whole matrix C and the mean matrix Q_m to obtain the row adjacent gradient statistic D_H = [d_1, d_2, d_3, …, d_K], where K is the total number of elements contained in the feature vector D_H; performing binarization processing on D_H to obtain the statistical feature of the row adjacent gradient Z_H = [z_1, z_2, …, z_K], with the calculation formula as follows:
wherein: z is a radical of H (i) Is Z H When i = K, d H (i+1)=d H (1)。
7. The image hash processing method based on adjacent gradient and structural features according to any one of claims 1 to 3 and 5 to 6, wherein: the extracting of the luminance component comprises:
setting the luminance component of the image as Y and converting Y into a three-dimensional space to extract structural features: with the horizontal coordinate position of the Y component as the x axis, the vertical position as the y axis, and the pixel point value at the corresponding coordinate as the z axis, a three-dimensional curved-surface feature map is constructed; the peak-top curves and peak-valley curves of the Y component projected onto the xoz and yoz planes are drawn from the 2 viewing angles, and at the same time the three-dimensional surface of the Y component is sectioned with equidistant slice planes parallel to the yoz plane to obtain a segmentation map, so that the structural features of the image are extracted.
8. The image hash processing method based on adjacent gradient and structural features according to claim 7, wherein: the extracting of the structural features of the image comprises:
dividing the Y component image into a series of non-overlapping small blocks of size b×b and averaging the pixel values of each small block to obtain the feature matrix M; obtaining the peak-top curves and peak-valley curves of the feature matrix M under the xoz and yoz projections, calculating the concave-convex point sets of the peak-top and peak-valley curves under the different projections, calculating the position information of the concave-convex point sets on the xoy plane, and combining and binarizing them to obtain the local feature point feature Z_1; in order to extract the position information of the concave-convex point sets under the different projections and construct the position feature, as with the extraction of the structural feature point features, calculating the intersection of the position information of the different viewing angles on the xoy plane to obtain intersection matrices, and performing row-union and transposition operations on the intersection matrices to obtain the position matrix Z_2, thereby obtaining the local structural feature Z = [Z_1, Z_2]; and equally dividing the three-dimensional image into K = N/b section matrices with planes parallel to the xoz axis, obtaining the overall feature S by counting the pixel point set of the number of pixel points contained in each section and the variance of each section matrix and binarizing them, so as to obtain the structural feature H_2 = [Z_1, Z_2, S], of length L_2 = 6×N/b − 2.
9. The image hash processing method based on adjacent gradient and structural features according to claim 8, wherein: the overall feature comprises:
counting the pixel point set S_1 of the number of pixel points contained in each section and the variance S_2 of each section matrix, and binarizing S_1 and S_2 respectively to obtain S_3 and S_4.
10. The image hash processing method based on adjacent gradient and structural features of claim 9, wherein: the obtaining of the final hash sequence comprises:
combining the adjacent gradient statistical feature H_1 and the structural feature H_2 to obtain the intermediate hash H_m, expressed as H_m = [H_1, H_2], of length L = L_1 + L_2 = 12×N/b − 2 bits; then generating a key K of length L with a random generator, and performing position indexing on the intermediate hash H_m after scrambling with the key K, calling the i-th bit value according to the following formula:
h(i) = H_m(K[i])
wherein: k [ i ]]H to be indexed representing ith number in pseudo-random number sequence K m And assigning the value to the ith position of the new hash sequence H for position scrambling to obtain the final hash sequence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110327145.2A CN113095380B (en) | 2021-03-26 | 2021-03-26 | Image hash processing method based on adjacent gradient and structural features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113095380A CN113095380A (en) | 2021-07-09 |
CN113095380B true CN113095380B (en) | 2023-03-31 |
Family
ID=76670187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110327145.2A Active CN113095380B (en) | 2021-03-26 | 2021-03-26 | Image hash processing method based on adjacent gradient and structural features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113095380B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117034367B (en) * | 2023-10-09 | 2024-01-26 | 北京点聚信息技术有限公司 | Electronic seal key management method |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490789A (en) * | 2019-07-15 | 2019-11-22 | 上海电力学院 | A kind of image hashing acquisition methods based on color and structure feature |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108734728A (en) * | 2018-04-25 | 2018-11-02 | 西北工业大学 | A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image |
CN111177432B (en) * | 2019-12-23 | 2020-11-03 | 北京航空航天大学 | Large-scale image retrieval method based on hierarchical depth hash |
CN111429337B (en) * | 2020-02-28 | 2022-06-21 | 上海电力大学 | Image hash acquisition method based on transform domain and shape characteristics |
CN111787179B (en) * | 2020-05-30 | 2022-02-15 | 上海电力大学 | Image hash acquisition method, image security authentication method and device |
CN112232428B (en) * | 2020-10-23 | 2021-11-16 | 上海电力大学 | Image hash acquisition method based on three-dimensional characteristics and energy change characteristics |
Also Published As
Publication number | Publication date |
---|---|
CN113095380A (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Li et al. | Robust image hashing based on random Gabor filtering and dithered lattice vector quantization | |
Gilani et al. | Learning from millions of 3D scans for large-scale 3D face recognition | |
Monga et al. | Perceptual image hashing via feature points: performance evaluation and tradeoffs | |
Dua et al. | Image forgery detection based on statistical features of block DCT coefficients | |
JP2694101B2 (en) | Method and apparatus for pattern recognition and validation | |
Singh et al. | Fast and efficient region duplication detection in digital images using sub-blocking method | |
Huang et al. | Perceptual image hashing with texture and invariant vector distance for copy detection | |
CN110287780B (en) | Method for extracting facial image features under illumination | |
CN106548445A (en) | Spatial domain picture general steganalysis method based on content | |
Mohamed et al. | An improved LBP algorithm for avatar face recognition | |
CN112232428B (en) | Image hash acquisition method based on three-dimensional characteristics and energy change characteristics | |
CN102693522A (en) | Method for detecting region duplication and forgery of color image | |
Tang et al. | Robust Image Hashing via Random Gabor Filtering and DWT. | |
Liu | An improved approach to exposing JPEG seam carving under recompression | |
Samanta et al. | Analysis of perceptual hashing algorithms in image manipulation detection | |
Hou et al. | Detection of hue modification using photo response nonuniformity | |
Deng et al. | Deep multi-scale discriminative networks for double JPEG compression forensics | |
CN113095380B (en) | Image hash processing method based on adjacent gradient and structural features | |
Tang et al. | Robust image hashing via visual attention model and ring partition | |
Yuan et al. | Perceptual image hashing based on three-dimensional global features and image energy | |
Liang et al. | Robust hashing with local tangent space alignment for image copy detection | |
Doegar et al. | Image forgery detection based on fusion of lightweight deep learning models | |
Shang et al. | Double JPEG detection using high order statistic features | |
Ustubıoglu et al. | Image forgery detection using colour moments | |
CN106952211B (en) | Compact image hashing method based on feature point projection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||