CN110532409B - Image retrieval method based on heterogeneous bilinear attention network - Google Patents


Info

Publication number
CN110532409B
CN110532409B (application CN201910692241.XA)
Authority
CN
China
Prior art keywords
bilinear
image
vector
network
branches
Prior art date
Legal status
Active
Application number
CN201910692241.XA
Other languages
Chinese (zh)
Other versions
CN110532409A (en)
Inventor
王鹏
苏海波
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201910692241.XA
Publication of CN110532409A
Application granted
Publication of CN110532409B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to an image retrieval method based on a heterogeneous bilinear attention network with two specialized branches: one branch provides key region location information, and the other provides attribute-level description. The outputs of the two branches are integrated into an image-level representation by an attention-based bilinear module. Two auxiliary tasks are used to pre-train the two branches to ensure that they can perform key region localization and attribute description: the first branch adopts an hourglass network for key region detection, and the second branch uses the Inception-ResNet-v2 network for attribute prediction. A channel-wise attention mechanism driven by both branches weights the channels of each branch's output representation, and the weighted representations are then integrated into a final image-level representation by compact bilinear pooling. Euclidean distances between the representations of different images are then computed and sorted to obtain the final retrieval result.

Description

Image retrieval method based on heterogeneous bilinear attention network
Technical Field
The invention belongs to the field of content-based image retrieval, and particularly relates to an image retrieval method and system that uses a channel attention mechanism to optimize heterogeneous features and uses compact bilinear pooling to model the interaction between the optimized features of two heterogeneous branches.
Background
Content-based image retrieval effectively helps users browse a large image database and find the images they want. It has great commercial value and has therefore attracted much research interest in recent years. However, the images in a database are often acquired under different lighting conditions, from different shooting angles, and against cluttered backgrounds. In addition, the differences between images often lie in details; for example, the neckline of a garment comes in many styles: round collar, V-collar, boat collar, and so on. These phenomena make image retrieval very challenging. The challenges can be summarized as two problems: "where to look" and "how to describe". "Where to look" addresses how to find the key parts of an object: an image usually contains multiple key parts of the search target, and two images can be distinguished by comparing the visual appearance of these key parts. "How to describe" addresses how to represent the visual content of the image so that the retrieval system is insensitive to factors such as illumination, background, pose, and viewing angle, and focuses instead on the attributes of the retrieval target.
Disclosure of Invention
Technical problem to be solved
In order to avoid the defects of the prior art, the invention provides an image retrieval method based on a heterogeneous bilinear attention network framework.
Technical scheme
An image retrieval method based on a heterogeneous bilinear attention network is characterized by comprising the following steps:
Step 1: a picture is passed through an hourglass network to obtain a feature map $V_l \in \mathbb{R}^{C_l \times W_l \times H_l}$; meanwhile, another feature map $V_a \in \mathbb{R}^{C_a \times W_a \times H_a}$ is obtained through the Inception-ResNet-v2 network;
Step 2: the two feature maps are each globally average pooled to obtain two vectors $v_a \in \mathbb{R}^{C_a}$ and $v_l \in \mathbb{R}^{C_l}$:

$v_a = \mathrm{GlobalAveragePooling}(V_a)$, (1)

$v_l = \mathrm{GlobalAveragePooling}(V_l)$. (2)
and 3, step 3: v is to be a And v l Splicing the two multi-layer perceptrons into a vector, and then calculating the attention weight of the channel-wise of the feature map of each branch through the two multi-layer perceptrons in parallel; the specific calculation formula is as follows:
Figure RE-GDA0002218633990000023
Figure RE-GDA0002218633990000024
herein, the
Figure RE-GDA0002218633990000025
Are all linear transformation matrices; k is a radical of a And k l Is the dimension of the projection, and,
Figure RE-GDA0002218633990000026
c ═ C for splicing operation a +C l ,α a For the channel-wise attention weight, α, assigned to the attribute classification leg l Channel-wise attention weights assigned to the key zone location legs; sigmoid and Relu are commonly used activation functions;
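The channel-wise attention of step 3 can be sketched as follows, assuming a two-layer perceptron per branch with a ReLU hidden activation; the matrix shapes used here (W_a1 of size k_a × C with C = C_a + C_l, W_a2 of size C_a × k_a, and likewise for the localization branch) are inferred from the projection dimensions and are an assumption:

```python
import numpy as np

def channel_attention(v_a, v_l, W_a1, W_a2, W_l1, W_l2):
    """Channel-wise attention weights for the two branches (Eqs. (3)-(4)).

    v_a, v_l are the globally average pooled branch vectors; the W_*
    arguments are the linear transformation matrices (shapes assumed:
    W_a1 is k_a x C, W_a2 is C_a x k_a, and the localization-branch
    matrices likewise).
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    relu = lambda z: np.maximum(z, 0.0)
    v = np.concatenate([v_a, v_l])               # splice into one C-vector
    alpha_a = sigmoid(W_a2 @ relu(W_a1 @ v))     # weights for attribute branch
    alpha_l = sigmoid(W_l2 @ relu(W_l1 @ v))     # weights for localization branch
    return alpha_a, alpha_l
```

Because the last activation is a sigmoid, every weight lies strictly between 0 and 1, which is what allows it to rescale a channel without flipping its sign.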
Step 4: two weighted feature maps $\hat{V}_a$ and $\hat{V}_l$ are obtained, and are then resampled to the same spatial size W × H;
Step 5: given the feature vector $x_{ij} \in \mathbb{R}^{c}$ at position (i, j) of the feature maps obtained in step 4, project $x_{ij}$ to a destination vector $y_{ij} \in \mathbb{R}^{d}$ using the count sketch function $\Psi$. A sign vector $s \in \{+1, -1\}^{c}$ and a mapping vector $p \in \{1, \dots, d\}^{c}$ are also used; each value in $s$ is chosen at random from {+1, −1} with equal probability, and each value in $p$ is chosen from {1, …, d} with uniform probability. The count sketch function $\Psi$ is defined as follows:

$y_{ij} = \Psi(x_{ij}, s, p) = [v_1, \dots, v_d]$, (5)

where $v_t = \sum_{l} s[l] \cdot x_{ij}[l]$ such that $p[l] = t$. If the outer product of two vectors $x^a_{ij} \in \mathbb{R}^{C_a}$ and $x^l_{ij} \in \mathbb{R}^{C_l}$ is taken as the input of the count sketch function, then this count sketch can be written as the convolution of two count sketch functions that each take a single vector as input:

$\Psi(x^a_{ij} \otimes x^l_{ij}, s, p) = \Psi(x^a_{ij}, s^a, p^a) * \Psi(x^l_{ij}, s^l, p^l)$, (6)

where $\otimes$ denotes the outer product operation and $*$ denotes the convolution operation; finally, the bilinear feature is obtained through time-domain and frequency-domain conversion:

$F_{ij} = \mathrm{FFT}^{-1}(\mathrm{FFT}(y^a_{ij}) \circ \mathrm{FFT}(y^l_{ij}))$, (7)

where $\circ$ denotes element-wise multiplication;
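A minimal sketch of the count sketch projection (Eq. (5)) and the FFT-based bilinear feature conversion at the end of step 5. The destination dimension d and the random seed are illustrative, and the code uses 0-based indices where the text uses p ∈ {1, …, d}:

```python
import numpy as np

def count_sketch(x, s, p, d):
    """Count sketch Psi(x, s, p), Eq. (5): y[t] accumulates s[l]*x[l]
    over every index l with p[l] == t (p holds 0-based bins here)."""
    y = np.zeros(d)
    np.add.at(y, p, s * x)   # unbuffered scatter-add into the d bins
    return y

def bilinear_feature(x_a, x_l, d=64, seed=0):
    """Approximate the bilinear (outer-product) feature of two branch
    vectors by circular convolution of their count sketches, computed
    in the frequency domain (Eq. (7)). d and seed are illustrative."""
    rng = np.random.default_rng(seed)
    s_a = rng.choice([-1.0, 1.0], size=x_a.shape)
    p_a = rng.integers(0, d, size=x_a.shape)
    s_l = rng.choice([-1.0, 1.0], size=x_l.shape)
    p_l = rng.integers(0, d, size=x_l.shape)
    y_a = count_sketch(x_a, s_a, p_a, d)
    y_l = count_sketch(x_l, s_l, p_l, d)
    # circular convolution = inverse FFT of the element-wise product of FFTs
    return np.real(np.fft.ifft(np.fft.fft(y_a) * np.fft.fft(y_l)))
```

This keeps the d-dimensional sketch instead of the full $C_a \times C_l$ outer product, which is what makes the bilinear pooling "compact".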
Step 6: during training, ID classification training is performed on the resulting bilinear features; during testing, the regularized bilinear features of the query image and of the database images are computed, and the Euclidean distances between the query image and the database images are then calculated and sorted to obtain the final top-k result.
Advantageous effects
The image retrieval method based on a heterogeneous bilinear attention network provided by the invention produces a robust bilinear feature of an image. For the content-based image retrieval task, this bilinear feature not only solves the "where to look" problem, i.e., finding the key regions in the image, but also solves the "how to describe" problem for those key regions, giving the attribute features of each key region.
Detailed Description
The technical scheme of the invention is as follows: the network has two specialized branches, one providing key region location information and the other providing attribute-level description. The outputs of the two branches are integrated into an image-level representation by an attention-based bilinear module. Two auxiliary tasks are used to pre-train the two branches to ensure that they can perform key region localization and attribute description: the first branch adopts an hourglass network for key region detection, and the second branch uses the Inception-ResNet-v2 network for attribute prediction. A channel-wise attention mechanism driven by both branches weights the channels of each branch's output representation, and the weighted representations are then integrated into a final image-level representation by compact bilinear pooling. Euclidean distances between the representations of different images are then computed and sorted to obtain the final retrieval result.
The specific process is as follows:
1. Attribute classification branch pre-training
Given a picture, it is resized to 299 × 299 using bilinear interpolation. The resized picture is input into the Inception-ResNet-v2 network with its last two layers (the average pooling layer and the fully connected layer) removed, yielding a feature map of size 1536 × 8 × 8. An average pooling layer and a fully connected layer are then appended to the network again; the difference from the original network is that the output dimension of the new fully connected layer is the number of attributes to be classified. To address data imbalance, the invention selects attributes with comparable sample counts for prediction, uses a binary cross-entropy loss function to evaluate the multi-label attribute prediction task, and uses stochastic gradient descent to optimize and update the parameters.
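The binary cross-entropy loss used to pre-train the attribute branch can be sketched as follows; this is a plain numpy illustration of the multi-label loss, not the actual training code:

```python
import numpy as np

def multilabel_bce(logits, targets):
    """Binary cross-entropy over attribute logits for multi-label
    attribute prediction. targets is a 0/1 vector with one entry per
    attribute; each attribute gets its own sigmoid probability."""
    p = 1.0 / (1.0 + np.exp(-logits))   # per-attribute sigmoid
    eps = 1e-12                         # numerical guard for log(0)
    return float(-np.mean(targets * np.log(p + eps)
                          + (1 - targets) * np.log(1 - p + eps)))
```

At zero logits every probability is 0.5, so the loss equals ln 2 regardless of the labels, a handy sanity check for an untrained head.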
2. Key region localization branch pre-training
Given a picture, it is resized to 256 × 256 using bilinear interpolation. The resized picture is input into an hourglass network with the number of landmarks set to 8, i.e., the coordinates of 8 key points are output. A 64 × 64 heatmap is generated from the coordinates of the 8 key points, and a normalized mean error is computed against the corresponding 64 × 64 ground-truth heatmap. The Adam optimizer is used to update the parameters during training.
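A sketch of the two quantities this branch works with, a 64 × 64 keypoint heatmap and a normalized mean error; the Gaussian width sigma and the normalization length are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

def keypoint_heatmap(x, y, size=64, sigma=1.0):
    """Render one size x size Gaussian heatmap peaked at keypoint
    (x, y); sigma is an illustrative choice."""
    xs = np.arange(size)
    gx = np.exp(-((xs - x) ** 2) / (2 * sigma ** 2))
    gy = np.exp(-((xs - y) ** 2) / (2 * sigma ** 2))
    return np.outer(gy, gx)   # separable Gaussian; value 1.0 at (y, x)

def normalized_mean_error(pred, gt, norm):
    """Normalized mean error between predicted and ground-truth
    keypoint coordinates, arrays of shape (8, 2); norm is the
    normalization length, e.g. the image diagonal (an assumption)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)) / norm)
```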
3. Data enhancement
The same picture is randomly flipped horizontally and randomly rotated by an angle θ ∈ [−30°, +30°], with the same transformation applied to both copies. The picture is then resized to 299 × 299 and to 256 × 256 using bilinear interpolation, and normalization finally yields two tensors (299 × 299 × 3 and 256 × 256 × 3). Because the 256 × 256 × 3 tensor is used as input to the key region localization branch, the coordinates of the corresponding key points in the original image must be transformed accordingly: when the image is flipped horizontally, the coordinates of each left-side point become the coordinates of the corresponding right-side point and vice versa, and the key point coordinates are likewise adjusted during random rotation and resizing.
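The keypoint coordinate adjustments described above can be sketched as follows (horizontal flip and rotation about the image centre; the centre-of-rotation convention is an assumption):

```python
import numpy as np

def flip_keypoints_lr(pts, width):
    """Horizontal flip of keypoints (N, 2): x -> width - 1 - x."""
    out = pts.copy().astype(float)
    out[:, 0] = width - 1 - out[:, 0]
    return out

def rotate_keypoints(pts, theta_deg, cx, cy):
    """Rotate keypoints by theta_deg around (cx, cy), matching the
    random rotation applied to the picture itself."""
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return (pts - [cx, cy]) @ R.T + [cx, cy]
```

Resizing scales coordinates by the same factor as the image, e.g. multiplying by 256/original_size for the localization input.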
4. Obtaining branch features of an image
Inputting the tensor (299 × 299 × 3) obtained after data preprocessing into the Inception-ResNet-v2 network yields the feature map $V_a$; inputting the other tensor (256 × 256 × 3) of the same image into the hourglass network yields the other feature map $V_l$.
5. Feature optimization based on channel-wise attention mechanism
After global average pooling of the feature maps output by the two branches, two global vectors $v_a \in \mathbb{R}^{C_a}$ and $v_l \in \mathbb{R}^{C_l}$ are obtained. The two vectors are then concatenated into $[v_a; v_l]$. The concatenated vector is passed through a fully connected layer and a Relu layer to obtain a 512-dimensional hidden layer, and then through a fully connected layer and a Sigmoid layer to obtain the channel weight vectors $\alpha_a$ and $\alpha_l$ of the feature maps. Weights are assigned to the channels of the two feature maps, i.e., each component of a weight vector multiplies the corresponding channel, yielding the optimized features $\hat{V}_a$ and $\hat{V}_l$.
6. Dual-feature compact bilinear pooling
The optimized features of the two branches, $\hat{V}_a$ and $\hat{V}_l$, are adjusted to the same spatial size (8 × 8) after average pooling. The vectors $x^a_{ij}$ and $x^l_{ij}$ at position (i, j) of the two branch feature maps are each projected onto a d-dimensional vector through a count sketch function; experiments show that the effect is better when d = 16k. The specific process is as follows: for $x^a_{ij}$, two vectors $s_1$ and $p_1$ are created; each value of $s_1$ is initialized by a random draw from {+1, −1} with equal probability, and each value of $p_1$ is initialized by a random draw from {1, …, 16k} with equal probability. $x^a_{ij}$, $s_1$, and $p_1$ are taken as the input of the count sketch function. In the count sketch function, a vector $y_1 = [0, \dots, 0]_{16k}$ is first initialized; the value of the t-th dimension of $y_1$ is then obtained by

$y_1[t] = \sum_{l} s_1[l] \cdot x^a_{ij}[l]$ such that $p_1[l] = t$,

finally giving the projection result $y_1$. Similarly, for the input $x^l_{ij}$ the count sketch function outputs a projection result $y_2$. Finally, time-domain and frequency-domain conversion of $y_1$ and $y_2$ yields the vector $F_{ij} = \mathrm{FFT}^{-1}(\mathrm{FFT}(y_1) \circ \mathrm{FFT}(y_2))$. The 8 × 8 vectors $F_{ij}$ are assembled into a tensor $F$. $F$ is then sum-pooled, i.e., each channel of $F$ is summed over its spatial positions, giving a vector $f$. This vector $f$ is then processed with the signed square root and $L_2$ normalization, finally yielding the bilinear feature of the picture. The bilinear feature is then reduced in dimension by passing it through a fully connected layer and a batch normalization layer, giving the final compact bilinear feature.
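The sum pooling, signed square root, and L2 normalization at the end of this section can be sketched as:

```python
import numpy as np

def compact_bilinear_postprocess(F):
    """Sum-pool the (channels, 8, 8) tensor F over its spatial grid,
    then apply the signed square root and L2 normalization."""
    f = F.reshape(F.shape[0], -1).sum(axis=1)   # sum over the 8x8 positions
    f = np.sign(f) * np.sqrt(np.abs(f))         # signed square root
    norm = np.linalg.norm(f)
    return f / norm if norm > 0 else f
```

The signed square root dampens large bilinear responses while keeping their sign, and the L2 step puts all image features on a common scale before distance comparison.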
7. Model training
With the compact bilinear feature as input and a fully connected layer as the classifier, the ID classification task is realized. One ID comprises all of its positive-sample pictures (i.e., pictures containing the same object); for one ID, all other IDs are negative samples. In other words, each instance is treated as a separate class, and the output dimension of the fully connected layer equals the total number of IDs. The main loss function is the cross-entropy loss:
$L = -\log \dfrac{e^{x[gt]}}{\sum_j e^{x[j]}}$
here, x is a prediction vector, and gt is an index corresponding to the real tag. And two auxiliary loss functions, namely a binary cross entropy loss function for multi-label attribute prediction training on the attribute classification branch and a normalized average error function for key point detection on the key region positioning branch. The three losses are assigned different weights, resulting in a total loss. The optimizer selects an Adam optimizer to calculate the gradient and perform back propagation. The learning rate needs to be set when updating the parameters, the initial learning rate is set to 0.0001,then every 5 epochs, the learning rate decays to half of the original. The number of pictures for one iteration is set to 20 pictures. The loss plateaus after 35 epochs. To avoid the over-fitting training, a constraint term L is added to the loss term 2 And (5) normalizing.
8. Model application
No data enhancement is needed here: the image only needs to be resized to 299 × 299 and 256 × 256 and normalized to serve as input to the attribute classification branch and the key region localization branch. The parameters of the whole network model are fixed; the image data is simply input and propagated forward. The compact bilinear feature finally produced by the model is taken as the feature of the image, so the feature vectors of all images can be obtained; after $L_2$ normalization these vectors are mapped onto a hypersphere and can serve as the basis of the metric. Given a query image, its feature vector $F_q$ is obtained after model inference, and the database images yield all feature vectors $\{F_1, \dots, F_m\}$ after model inference. The Euclidean distance between the query feature $F_q$ and all feature vectors $\{F_1, \dots, F_m\}$ is computed: $d_i = \|F_q - F_i\|_2$, $i = 1, \dots, m$, yielding $D = [d_1, \dots, d_m]$. All values in $D$ are sorted in ascending order; top-k takes the first k results, and the corresponding database images are taken as the retrieved results. If the k database images predicted by the model include the database image that truly corresponds to the query image, the retrieval is considered successful. For example, if the top-5 result is $[d_{10}, d_{35}, d_{60}, d_{61}, d_{26}]$ and the database image sought by the query image is image No. 61 in the database, the retrieval is successful.
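The distance computation and top-k ranking can be sketched as:

```python
import numpy as np

def retrieve_topk(F_q, F_db, k=5):
    """Euclidean distances from the query feature F_q to all database
    features F_db (rows), sorted ascending; returns the indices of the
    k nearest database images."""
    d = np.linalg.norm(F_db - F_q, axis=1)   # d_i = ||F_q - F_i||_2
    return np.argsort(d)[:k]
```

Because all features are L2-normalized, ranking by Euclidean distance is equivalent to ranking by cosine similarity.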

Claims (1)

1. An image retrieval method based on a heterogeneous bilinear attention network is characterized by comprising the following steps:
Step 1: a picture is passed through an hourglass network to obtain a feature map $V_l \in \mathbb{R}^{C_l \times W_l \times H_l}$; meanwhile, another feature map $V_a \in \mathbb{R}^{C_a \times W_a \times H_a}$ is obtained through the Inception-ResNet-v2 network;
Step 2: the two feature maps are each globally average pooled to obtain two vectors $v_a \in \mathbb{R}^{C_a}$ and $v_l \in \mathbb{R}^{C_l}$:

$v_a = \mathrm{GlobalAveragePooling}(V_a)$, (1)

$v_l = \mathrm{GlobalAveragePooling}(V_l)$. (2)
and step 3: v is to be a And v l Splicing the two multi-layer perceptrons into a vector, and then calculating the attention weight of the channel-wise of the feature map of each branch through the two multi-layer perceptrons in parallel; the specific calculation formula is as follows:
Figure FDA0003729766240000015
Figure FDA0003729766240000016
herein, the
Figure FDA0003729766240000017
Are all linear transformation matrices; k is a radical of a And k l Is the dimension of the projection, and,
Figure FDA0003729766240000018
c ═ C for splicing operation a +C l ,α a For the channel-wise attention weight, α, assigned to the attribute classification leg l Channel-wise attention weights assigned to the key zone location legs; sigmoid and Relu are commonly used activation functions;
Step 4: two weighted feature maps $\hat{V}_a$ and $\hat{V}_l$ are obtained, and are then resampled to the same spatial size W × H;
Step 5: given the feature vector $x_{ij} \in \mathbb{R}^{c}$ at position (i, j) of the feature maps obtained in step 4, project $x_{ij}$ to a destination vector $y_{ij} \in \mathbb{R}^{d}$ using the count sketch function $\Psi$. A sign vector $s \in \{+1, -1\}^{c}$ and a mapping vector $p \in \{1, \dots, d\}^{c}$ are also used; each value in $s$ is chosen at random from {+1, −1} with equal probability, and each value in $p$ is chosen from {1, …, d} with uniform probability. The count sketch function $\Psi$ is defined as follows:

$y_{ij} = \Psi(x_{ij}, s, p) = [v_1, \dots, v_d]$, (5)

where $v_t = \sum_{l} s[l] \cdot x_{ij}[l]$ such that $p[l] = t$. If the outer product of two vectors $x^a_{ij} \in \mathbb{R}^{C_a}$ and $x^l_{ij} \in \mathbb{R}^{C_l}$ is taken as the input of the count sketch function, then this count sketch can be written as the convolution of two count sketch functions that each take a single vector as input:

$\Psi(x^a_{ij} \otimes x^l_{ij}, s, p) = \Psi(x^a_{ij}, s^a, p^a) * \Psi(x^l_{ij}, s^l, p^l)$, (6)

where $\otimes$ denotes the outer product operation and $*$ denotes the convolution operation; finally, the bilinear feature is obtained through time-domain and frequency-domain conversion:

$F_{ij} = \mathrm{FFT}^{-1}(\mathrm{FFT}(y^a_{ij}) \circ \mathrm{FFT}(y^l_{ij}))$, (7)

where $\circ$ denotes element-wise multiplication;
Step 6: during training, ID classification training is performed on the resulting bilinear features; during testing, the regularized bilinear features of the query image and of the database images are computed, and the Euclidean distance between the query image and the database images is then calculated to obtain the final top-k result.
CN201910692241.XA 2019-07-30 2019-07-30 Image retrieval method based on heterogeneous bilinear attention network Active CN110532409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910692241.XA CN110532409B (en) 2019-07-30 2019-07-30 Image retrieval method based on heterogeneous bilinear attention network


Publications (2)

Publication Number Publication Date
CN110532409A CN110532409A (en) 2019-12-03
CN110532409B true CN110532409B (en) 2022-09-27

Family

ID=68661312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910692241.XA Active CN110532409B (en) 2019-07-30 2019-07-30 Image retrieval method based on heterogeneous bilinear attention network

Country Status (1)

Country Link
CN (1) CN110532409B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011362A (en) * 2021-03-29 2021-06-22 吉林大学 Fine-grained fundus image grading algorithm based on bilinear pooling and attention mechanism
CN115754108B (en) * 2022-11-23 2023-06-09 福建省杭氟电子材料有限公司 Acidity determination system and method for electronic grade hexafluorobutadiene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107291945A (en) * 2017-07-12 2017-10-24 上海交通大学 The high-precision image of clothing search method and system of view-based access control model attention model
US20170308770A1 (en) * 2016-04-26 2017-10-26 Xerox Corporation End-to-end saliency mapping via probability distribution prediction
CN109117437A (en) * 2017-06-23 2019-01-01 李峰 A kind of image feature extraction method towards image of clothing retrieval


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Cascade Multi-View Hourglass Model for Robust 3D Face Alignment;Jiankang Deng等;《2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)》;20180607;第399-403页 *
CNN-based human pose recognition; Zhou Yikai et al.; Computer and Modernization; Jiangxi Computer Society, Jiangxi Institute of Computing Technology; 2019-02-28 (No. 2); pp. 49-54 *

Also Published As

Publication number Publication date
CN110532409A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN111489358B (en) Three-dimensional point cloud semantic segmentation method based on deep learning
CN112488210A (en) Three-dimensional point cloud automatic classification method based on graph convolution neural network
CN111625667A (en) Three-dimensional model cross-domain retrieval method and system based on complex background image
CN112907602B (en) Three-dimensional scene point cloud segmentation method based on improved K-nearest neighbor algorithm
CN108875076B (en) Rapid trademark image retrieval method based on Attention mechanism and convolutional neural network
CN111695494A (en) Three-dimensional point cloud data classification method based on multi-view convolution pooling
CN112784782B (en) Three-dimensional object identification method based on multi-view double-attention network
Chen et al. DDGCN: graph convolution network based on direction and distance for point cloud learning
CN113095251B (en) Human body posture estimation method and system
CN112364747B (en) Target detection method under limited sample
CN110532409B (en) Image retrieval method based on heterogeneous bilinear attention network
CN115222998B (en) Image classification method
Lv et al. ESSINet: Efficient spatial–spectral interaction network for hyperspectral image classification
CN112489119A (en) Monocular vision positioning method for enhancing reliability
Sahu et al. Dynamic routing using inter capsule routing protocol between capsules
Woźniak et al. Basic concept of cuckoo search algorithm for 2D images processing with some research results: An idea to apply cuckoo search algorithm in 2d images key-points search
Lei et al. Mesh convolution with continuous filters for 3-D surface parsing
CN112329818B (en) Hyperspectral image non-supervision classification method based on graph convolution network embedded characterization
Kang et al. Region-enhanced feature learning for scene semantic segmentation
CN114723973A (en) Image feature matching method and device for large-scale change robustness
JP2023013293A (en) Training data generation apparatus, learning model generation apparatus, and method of generating training data
CN113011506A (en) Texture image classification method based on depth re-fractal spectrum network
CN109308936B (en) Grain crop production area identification method, grain crop production area identification device and terminal identification equipment
Zhang et al. Unsupervised learning of ALS point clouds for 3-D terrain scene clustering
CN112818982A (en) Agricultural pest image detection method based on depth feature autocorrelation activation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant