CN110955745A - Text hash retrieval method based on deep learning - Google Patents

Text hash retrieval method based on deep learning

Info

Publication number
CN110955745A
CN110955745A (application CN201910983514.6A)
Authority
CN
China
Prior art keywords
text
hash
data
trained
model
Prior art date
Legal status
Granted
Application number
CN201910983514.6A
Other languages
Chinese (zh)
Other versions
CN110955745B (en)
Inventor
寿震宇
钱江波
辛宇
谢锡炯
陈海明
Current Assignee
Guangxi Beluga Information Technology Co., Ltd.
Original Assignee
Ningbo University
Priority date
Filing date
Publication date
Application filed by Ningbo University
Priority to CN201910983514.6A
Publication of CN110955745A
Application granted
Publication of CN110955745B
Legal status: Active
Anticipated expiration

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/31Indexing; Data structures therefor; Storage structures
    • G06F16/316Indexing structures
    • G06F16/325Hash tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a text hash retrieval method based on deep learning. A bidirectional LSTM model extracts the semantic code corresponding to each item of original vocabulary data in the word embedding matrix; a text convolutional neural network is then connected in parallel after the bidirectional LSTM model and an attention mechanism is added; a sign function converts the output values of the second fully connected layer into the corresponding hash codes, and a classification layer reconstructs the category labels from the hash codes; finally, the vector data nearest in Hamming distance to the retrieval text hash code is searched for among the text library hash codes, completing the hash retrieval of the retrieval text data. The method has the advantages that the hash model has a stronger ability to learn from short texts, the added attention mechanism further improves the expressive power of the features, and the classification layer reconstructs the category labels from the hash codes, so that the hash model can use the label information more finely while learning the binary codes, yielding higher retrieval accuracy.

Description

Text hash retrieval method based on deep learning
Technical Field
The invention relates to a text hash retrieval method, in particular to a text hash retrieval method based on deep learning.
Background
As data scale and dimensionality grow, the cost of semantic retrieval rises sharply, and text hashing has received wide attention as an important way to realize efficient semantic retrieval. However, most text hashing algorithms use a machine learning mechanism to map explicit or keyword features of the text directly to binary codes; such features cannot effectively preserve semantic similarity between texts, so the retrieval efficiency of the resulting codes is low.
Disclosure of Invention
The invention aims to solve the technical problem of providing a text hash retrieval method based on deep learning, which has higher retrieval precision and efficiency.
The technical scheme adopted by the invention for solving the technical problems is as follows: a text hash retrieval method based on deep learning comprises the following steps:
①, acquiring text library data to be retrieved consisting of S items of original vocabulary data, and performing cleaning and word-segmentation preprocessing on the original vocabulary data to obtain preprocessed text library data;
②, defining the hash model to be trained as follows (a code sketch of this architecture is given after step ⑥):
②-1, performing word embedding on the preprocessed text library data to obtain a word embedding matrix;
②-2, constructing a bidirectional LSTM model and inputting the word embedding matrix into the bidirectional LSTM model to obtain the semantic code corresponding to each item of original vocabulary data;
②-3, extracting the n-gram features of each semantic code with a text convolutional neural network;
②-4, extracting the attention features of each semantic code with an attention mechanism;
②-5, concatenating the n-gram features and the attention features of each semantic code to obtain the comprehensive feature of each semantic code;
②-6, setting two first fully connected layers that use the ReLU function as the activation function, and converting the comprehensive feature of each semantic code into a higher-order feature through the first fully connected layers;
②-7, setting a second fully connected layer that uses the tanh function as the activation function, inputting the higher-order feature of each semantic code into the second fully connected layer, and converting the output value of the second fully connected layer into the corresponding hash code with a sign function;
②-8, setting a classification layer to classify the hash codes corresponding to the output values of the second fully connected layer;
③, shuffling the preprocessed text library data to obtain shuffled text library data, dividing the shuffled text library data evenly into P batches of text library data to be trained, where P > 1000, and training the hash model to be trained with the P batches of text library data to be trained according to a loss function defined by the similarity-preserving principle, obtaining the trained hash model;
④, encoding the preprocessed text library data with the trained hash model to obtain the corresponding text library hash codes;
⑤, given retrieval text data, performing cleaning and word-segmentation preprocessing on the retrieval text data to obtain preprocessed retrieval text data, and encoding the preprocessed retrieval text data with the trained hash model to obtain the corresponding retrieval text hash code;
⑥, searching the text library hash codes for the vector data nearest in Hamming distance to the retrieval text hash code, and taking the text composed of the original vocabulary data in the text library data to be retrieved that corresponds to that vector data as the final retrieval result, thereby completing the hash retrieval of the retrieval text data.
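For concreteness, the following is a minimal PyTorch sketch of the hash model defined in steps ②-1 to ②-8: a word embedding layer, a bidirectional LSTM, a parallel text-CNN branch (n-gram features) and attention branch over the semantic codes, concatenation, two ReLU fully connected layers, a tanh fully connected layer whose output is binarized with a sign function, and a classification layer. The layer sizes, kernel widths, and the specific additive form of the attention are illustrative assumptions, not values fixed by the patent.

```python
# Hypothetical sketch of the hash model in steps ②-1 to ②-8 (all sizes are assumed, not from the patent).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextHashModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, lstm_hidden=128,
                 num_filters=128, kernel_sizes=(2, 3, 4), hash_bits=64, num_classes=10):
        super().__init__()
        # ②-1: word embedding
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # ②-2: bidirectional LSTM producing one semantic code per position
        self.bilstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True, bidirectional=True)
        # ②-3: text CNN extracting n-gram features from the semantic codes
        self.convs = nn.ModuleList(nn.Conv1d(2 * lstm_hidden, num_filters, k) for k in kernel_sizes)
        # ②-4: a simple additive attention over the semantic codes (one assumed form)
        self.attn = nn.Linear(2 * lstm_hidden, 1)
        # ②-6: two first fully connected layers with ReLU activation
        feat_dim = num_filters * len(kernel_sizes) + 2 * lstm_hidden
        self.fc1 = nn.Linear(feat_dim, 512)
        self.fc2 = nn.Linear(512, 256)
        # ②-7: second fully connected layer with tanh activation; sign() of its output is the hash code
        self.hash_fc = nn.Linear(256, hash_bits)
        # ②-8: classification layer reconstructing the class label from the hash layer output
        self.classifier = nn.Linear(hash_bits, num_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)                          # (B, T, E)
        h, _ = self.bilstm(x)                                  # (B, T, 2H) semantic codes
        # ②-3: n-gram features, max-pooled per kernel size
        c = h.transpose(1, 2)                                  # (B, 2H, T)
        ngram = torch.cat([F.relu(conv(c)).max(dim=2).values for conv in self.convs], dim=1)
        # ②-4: attention-weighted sum of the semantic codes
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)     # (B, T)
        attended = torch.bmm(w.unsqueeze(1), h).squeeze(1)     # (B, 2H)
        # ②-5: concatenate the two feature groups into the comprehensive feature
        feat = torch.cat([ngram, attended], dim=1)
        # ②-6 / ②-7: higher-order features, then the tanh hash layer
        a = torch.tanh(self.hash_fc(F.relu(self.fc2(F.relu(self.fc1(feat))))))
        hash_code = torch.sign(a)                              # binary code in {-1, +1}
        # ②-8: sign() has zero gradient almost everywhere, so the classifier consumes the
        # continuous tanh output a during training (a common relaxation, assumed here).
        logits = self.classifier(a)
        return a, hash_code, logits
```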
In step ③, the hash model to be trained is trained, and the trained hash model is obtained, through the following specific process:
③-1, setting the maximum number of iterations and defining the loss function according to the similarity-preserving principle as follows:
[The loss function is given in the original filing as an equation image (BDA0002235981490000021) and is not reproduced here.]
where 1 ≤ i ≤ N and 1 ≤ j ≤ M, N = S/P, and M is the number of bits of the hash code corresponding to the output value of the second fully connected layer; y_i is the real label corresponding to the i-th vocabulary data in each batch of text library data to be trained, and ŷ_i is the output value of the classification layer corresponding to the i-th vocabulary data in each batch of text library data to be trained; y_ij is the value of the j-th bit of y_i, and ŷ_ij is the value of the j-th bit of ŷ_i; a_i denotes the output value of the second fully connected layer corresponding to the i-th vocabulary data in each batch of text library data to be trained; W denotes the trainable parameters of the classification layer; mean(a_i) denotes the average over the elements of a_i; λ1 is the hyperparameter of the second term in the preset loss function, λ2 is the hyperparameter of the third term in the preset loss function, and λ3 is the hyperparameter of the fourth term in the preset loss function; and ||·||_2 denotes the 2-norm;
③-2, iteratively optimizing the model to be trained with the Adam optimization algorithm according to the loss function, stopping the iteration process when the set maximum number of iterations is reached, and obtaining the trained hash model.
In step ③-1, λ1 = 0.1, λ2 = 0.1, and λ3 = 0.1.
The maximum number of iterations set in step ③-1 is 50,000.
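The exact loss formula is present in the filing only as an image, so the training sketch below assumes one plausible four-term form consistent with the variable descriptions above: a classification (label-reconstruction) term, a quantization term weighted by λ1 that pushes the hash-layer outputs a_i toward ±1, a bit-balance term on mean(a_i) weighted by λ2, and a 2-norm regularization of the classification-layer parameters W weighted by λ3. It reuses the TextHashModel sketch given earlier and is not the patent's actual formula.

```python
# Hypothetical training sketch for step ③; the four loss terms are assumptions,
# since the patent's exact formula is given only as an image.
import torch
import torch.nn.functional as F

def hash_loss(logits, labels, a, W, lam1=0.1, lam2=0.1, lam3=0.1):
    # Term 1: classification / label-reconstruction loss from the classification layer.
    cls = F.cross_entropy(logits, labels)
    # Term 2 (weight λ1): quantization loss pushing the hash-layer outputs a_i toward ±1.
    quant = ((a.abs() - 1.0) ** 2).sum(dim=1).mean()
    # Term 3 (weight λ2): bit-balance loss on mean(a_i).
    balance = (a.mean(dim=1) ** 2).mean()
    # Term 4 (weight λ3): 2-norm regularization of the classification-layer parameters W.
    reg = W.pow(2).sum()
    return cls + lam1 * quant + lam2 * balance + lam3 * reg

# Step ③-2: Adam optimization up to the set maximum number of iterations (50,000 per step ③-1).
def train(model, batches, max_iters=50000, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    it = 0
    while it < max_iters:
        for token_ids, labels in batches:   # the P shuffled batches of text library data to be trained
            a, _, logits = model(token_ids)
            loss = hash_loss(logits, labels, a, model.classifier.weight)
            opt.zero_grad()
            loss.backward()
            opt.step()
            it += 1
            if it >= max_iters:
                break
    return model
```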
Compared with the prior art, the method has the following advantages. First, a bidirectional LSTM model extracts the semantic code corresponding to each item of original vocabulary data in the word embedding matrix. Then, to strengthen the hash model's ability to learn from short texts, a text convolutional neural network is connected in parallel after the bidirectional LSTM model, and an attention mechanism is added to further improve the expressive power of the features. A hidden layer is added between the fully connected layer and the classification layer to serve as the hash layer; it converts the output values of the second fully connected layer into the corresponding hash codes with a sign function, and the classification layer reconstructs the category labels from the hash codes, so that the hash model can use the label information more finely while learning the binary codes. Finally, the vector data nearest in Hamming distance to the retrieval text hash code is searched for among the text library hash codes, the text composed of the original vocabulary data in the text library data to be retrieved that corresponds to that vector data is taken as the final retrieval result, and the hash retrieval of the retrieval text data is completed. Comparison experiments on short-text and ordinary text data sets show that the query accuracy of this text hash retrieval method is improved.
Detailed Description
The present invention is described in further detail below.
A text hash retrieval method based on deep learning comprises the following steps:
①, acquiring text library data to be retrieved consisting of S items of original vocabulary data, and performing cleaning and word-segmentation preprocessing on the original vocabulary data to obtain preprocessed text library data.
②, defining the hash model to be trained as follows:
②-1, performing word embedding on the preprocessed text library data to obtain a word embedding matrix;
②-2, constructing a bidirectional LSTM model and inputting the word embedding matrix into the bidirectional LSTM model to obtain the semantic code corresponding to each item of original vocabulary data;
②-3, extracting the n-gram features of each semantic code with a text convolutional neural network;
②-4, extracting the attention features of each semantic code with an attention mechanism;
②-5, concatenating the n-gram features and the attention features of each semantic code to obtain the comprehensive feature of each semantic code;
②-6, setting two first fully connected layers that use the ReLU function as the activation function, and converting the comprehensive feature of each semantic code into a higher-order feature through the first fully connected layers;
②-7, setting a second fully connected layer that uses the tanh function as the activation function, inputting the higher-order feature of each semantic code into the second fully connected layer, and converting the output value of the second fully connected layer into the corresponding hash code with a sign function;
②-8, setting a classification layer to classify the hash codes corresponding to the output values of the second fully connected layer.
③, shuffling the preprocessed text library data to obtain shuffled text library data, dividing the shuffled text library data evenly into P batches of text library data to be trained, where P > 1000, and training the hash model to be trained with the P batches of text library data to be trained according to the loss function defined by the similarity-preserving principle to obtain the trained hash model; the specific process is as follows:
③-1, setting the maximum number of iterations to 50,000 and defining the loss function according to the similarity-preserving principle as follows:
[The loss function is given in the original filing as an equation image (BDA0002235981490000041) and is not reproduced here.]
where 1 ≤ i ≤ N and 1 ≤ j ≤ M, N = S/P, and M is the number of bits of the hash code corresponding to the output value of the second fully connected layer; y_i is the real label corresponding to the i-th vocabulary data in each batch of text library data to be trained, and ŷ_i is the output value of the classification layer corresponding to the i-th vocabulary data in each batch of text library data to be trained; y_ij is the value of the j-th bit of y_i, and ŷ_ij is the value of the j-th bit of ŷ_i; a_i denotes the output value of the second fully connected layer corresponding to the i-th vocabulary data in each batch of text library data to be trained; W denotes the trainable parameters of the classification layer; mean(a_i) denotes the average over the elements of a_i; λ1 is the hyperparameter of the second term in the preset loss function, λ2 is the hyperparameter of the third term in the preset loss function, and λ3 is the hyperparameter of the fourth term in the preset loss function, with λ1 = 0.1, λ2 = 0.1, and λ3 = 0.1; and ||·||_2 denotes the 2-norm;
③-2, iteratively optimizing the model to be trained with the Adam optimization algorithm according to the loss function, stopping the iteration process when the set maximum number of iterations is reached, and obtaining the trained hash model.
④, the preprocessed text library data is encoded with the trained hash model to obtain the corresponding text library hash codes.
⑤, given retrieval text data, cleaning and word-segmentation preprocessing is performed on the retrieval text data to obtain preprocessed retrieval text data, and the preprocessed retrieval text data is encoded with the trained hash model to obtain the corresponding retrieval text hash code.
⑥, the text library hash codes are searched for the vector data nearest in Hamming distance to the retrieval text hash code, and the text composed of the original vocabulary data in the text library data to be retrieved that corresponds to that vector data is taken as the final retrieval result, completing the hash retrieval of the retrieval text data.
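As an illustration of steps ④ to ⑥, the sketch below encodes the text library and the retrieval text with a trained model and returns the library entry whose hash code is nearest in Hamming distance; the function names and the tensor layout are hypothetical.

```python
# Hypothetical sketch of steps ④-⑥: encode the library, encode the query,
# and return the library text nearest in Hamming distance.
import torch

@torch.no_grad()
def encode(model, token_ids):
    _, hash_code, _ = model(token_ids)                # codes in {-1, +1}
    return hash_code

@torch.no_grad()
def retrieve(model, library_token_ids, library_texts, query_token_ids):
    lib_codes = encode(model, library_token_ids)      # (S, M) text library hash codes (step ④)
    q_code = encode(model, query_token_ids)           # (1, M) retrieval text hash code (step ⑤)
    # Step ⑥: for ±1 codes, Hamming distance is the number of differing bits.
    hamming = (lib_codes != q_code).sum(dim=1)        # (S,)
    best = int(hamming.argmin())
    return library_texts[best], int(hamming[best])
```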
Word embedding, construction of a bidirectional LSTM model, extraction of the n-gram features of each semantic code with a text convolutional neural network, extraction of the attention features of each semantic code with an attention mechanism, and concatenation of the n-gram and attention features of each semantic code are all commonly known technical means in the field and are widely applied in the technical field of text retrieval.

Claims (4)

1. A text hash retrieval method based on deep learning, characterized by comprising the following steps:
①, acquiring text library data to be retrieved consisting of S items of original vocabulary data, and performing cleaning and word-segmentation preprocessing on the original vocabulary data to obtain preprocessed text library data;
②, defining the hash model to be trained as follows:
②-1, performing word embedding on the preprocessed text library data to obtain a word embedding matrix;
②-2, constructing a bidirectional LSTM model and inputting the word embedding matrix into the bidirectional LSTM model to obtain the semantic code corresponding to each item of original vocabulary data;
②-3, extracting the n-gram features of each semantic code with a text convolutional neural network;
②-4, extracting the attention features of each semantic code with an attention mechanism;
②-5, concatenating the n-gram features and the attention features of each semantic code to obtain the comprehensive feature of each semantic code;
②-6, setting two first fully connected layers that use the ReLU function as the activation function, and converting the comprehensive feature of each semantic code into a higher-order feature through the first fully connected layers;
②-7, setting a second fully connected layer that uses the tanh function as the activation function, inputting the higher-order feature of each semantic code into the second fully connected layer, and converting the output value of the second fully connected layer into the corresponding hash code with a sign function;
②-8, setting a classification layer to classify the hash codes corresponding to the output values of the second fully connected layer;
③, shuffling the preprocessed text library data to obtain shuffled text library data, dividing the shuffled text library data evenly into P batches of text library data to be trained, where P > 1000, and training the hash model to be trained with the P batches of text library data to be trained according to a loss function defined by the similarity-preserving principle, obtaining the trained hash model;
④, encoding the preprocessed text library data with the trained hash model to obtain the corresponding text library hash codes;
⑤, given retrieval text data, performing cleaning and word-segmentation preprocessing on the retrieval text data to obtain preprocessed retrieval text data, and encoding the preprocessed retrieval text data with the trained hash model to obtain the corresponding retrieval text hash code;
⑥, searching the text library hash codes for the vector data nearest in Hamming distance to the retrieval text hash code, and taking the text composed of the original vocabulary data in the text library data to be retrieved that corresponds to that vector data as the final retrieval result, thereby completing the hash retrieval of the retrieval text data.
2. The text hash retrieval method based on deep learning according to claim 1, characterized in that in step ③ the hash model to be trained is trained, and the trained hash model is obtained, through the following specific process:
③-1, setting the maximum number of iterations and defining the loss function according to the similarity-preserving principle as follows:
[The loss function is given in the original filing as an equation image (FDA0002235981480000021) and is not reproduced here.]
where 1 ≤ i ≤ N and 1 ≤ j ≤ M, N = S/P, and M is the number of bits of the hash code corresponding to the output value of the second fully connected layer; y_i is the real label corresponding to the i-th vocabulary data in each batch of text library data to be trained, and ŷ_i is the output value of the classification layer corresponding to the i-th vocabulary data in each batch of text library data to be trained; y_ij is the value of the j-th bit of y_i, and ŷ_ij is the value of the j-th bit of ŷ_i; a_i denotes the output value of the second fully connected layer corresponding to the i-th vocabulary data in each batch of text library data to be trained; W denotes the trainable parameters of the classification layer; mean(a_i) denotes the average over the elements of a_i; λ1 is the hyperparameter of the second term in the preset loss function, λ2 is the hyperparameter of the third term in the preset loss function, and λ3 is the hyperparameter of the fourth term in the preset loss function; and ||·||_2 denotes the 2-norm;
③-2, iteratively optimizing the model to be trained with the Adam optimization algorithm according to the loss function, stopping the iteration process when the set maximum number of iterations is reached, and obtaining the trained hash model.
3. The text hash retrieval method based on deep learning according to claim 2, characterized in that in step ③-1, λ1 = 0.1, λ2 = 0.1, and λ3 = 0.1.
4. The text hash retrieval method based on deep learning according to claim 2, characterized in that the maximum number of iterations set in step ③-1 is 50,000.
CN201910983514.6A 2019-10-16 2019-10-16 Text hash retrieval method based on deep learning Active CN110955745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910983514.6A CN110955745B (en) 2019-10-16 2019-10-16 Text hash retrieval method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910983514.6A CN110955745B (en) 2019-10-16 2019-10-16 Text hash retrieval method based on deep learning

Publications (2)

Publication Number Publication Date
CN110955745A true CN110955745A (en) 2020-04-03
CN110955745B CN110955745B (en) 2022-04-01

Family

ID=69976421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910983514.6A Active CN110955745B (en) 2019-10-16 2019-10-16 Text hash retrieval method based on deep learning

Country Status (1)

Country Link
CN (1) CN110955745B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488137A (en) * 2020-04-07 2020-08-04 重庆大学 Code searching method based on common attention characterization learning
CN111737406A (en) * 2020-07-28 2020-10-02 腾讯科技(深圳)有限公司 Text retrieval method, device and equipment and training method of text retrieval model
WO2023081483A1 (en) * 2021-11-08 2023-05-11 Oracle International Corporation Wide and deep network for language detection using hash embeddings

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180121799A1 (en) * 2016-11-03 2018-05-03 Salesforce.Com, Inc. Training a Joint Many-Task Neural Network Model using Successive Regularization
US20180165554A1 (en) * 2016-12-09 2018-06-14 The Research Foundation For The State University Of New York Semisupervised autoencoder for sentiment analysis
CN108932314A (en) * 2018-06-21 2018-12-04 南京农业大学 A kind of chrysanthemum image content retrieval method based on the study of depth Hash

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180121799A1 (en) * 2016-11-03 2018-05-03 Salesforce.Com, Inc. Training a Joint Many-Task Neural Network Model using Successive Regularization
US20180165554A1 (en) * 2016-12-09 2018-06-14 The Research Foundation For The State University Of New York Semisupervised autoencoder for sentiment analysis
CN108932314A (en) * 2018-06-21 2018-12-04 南京农业大学 A kind of chrysanthemum image content retrieval method based on the study of depth Hash

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHOU ZHENYU et al.: "Research Progress of Hashing Methods Based on Machine Learning Models", Wireless Communication Technology *
LI YUANCHENG et al.: "Hybrid Deep Learning Method for Open-Source Software Vulnerability Detection", Computer Engineering and Applications *
JIN ZHANYONG et al.: "Research on Sentiment Recognition of Online Public Opinion on Sudden Disaster Events Based on Long Short-Term Memory Networks", Information Science *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488137A (en) * 2020-04-07 2020-08-04 重庆大学 Code searching method based on common attention characterization learning
CN111488137B (en) * 2020-04-07 2023-04-18 重庆大学 Code searching method based on common attention characterization learning
CN111737406A (en) * 2020-07-28 2020-10-02 腾讯科技(深圳)有限公司 Text retrieval method, device and equipment and training method of text retrieval model
CN111737406B (en) * 2020-07-28 2022-11-29 腾讯科技(深圳)有限公司 Text retrieval method, device and equipment and training method of text retrieval model
WO2023081483A1 (en) * 2021-11-08 2023-05-11 Oracle International Corporation Wide and deep network for language detection using hash embeddings
GB2625485A (en) * 2021-11-08 2024-06-19 Oracle Int Corp Wide and deep network for language detection using hash embeddings

Also Published As

Publication number Publication date
CN110955745B (en) 2022-04-01

Similar Documents

Publication Publication Date Title
CN110298037B (en) Convolutional neural network matching text recognition method based on enhanced attention mechanism
CN110413785B (en) Text automatic classification method based on BERT and feature fusion
CN110275936B (en) Similar legal case retrieval method based on self-coding neural network
CN111694924B (en) Event extraction method and system
CN109299273B (en) Multi-source multi-label text classification method and system based on improved seq2seq model
CN110263325B (en) Chinese word segmentation system
CN113064959B (en) Cross-modal retrieval method based on deep self-supervision sorting Hash
CN110705296A (en) Chinese natural language processing tool system based on machine learning and deep learning
CN110955745B (en) Text hash retrieval method based on deep learning
CN111027595B (en) Double-stage semantic word vector generation method
CN111291188B (en) Intelligent information extraction method and system
CN111143563A (en) Text classification method based on integration of BERT, LSTM and CNN
CN112732864B (en) Document retrieval method based on dense pseudo query vector representation
CN112256727B (en) Database query processing and optimizing method based on artificial intelligence technology
CN114462420A (en) False news detection method based on feature fusion model
CN114780677B (en) Chinese event extraction method based on feature fusion
CN110704664B (en) Hash retrieval method
CN115408495A (en) Social text enhancement method and system based on multi-modal retrieval and keyword extraction
CN110688501B (en) Hash retrieval method of full convolution network based on deep learning
CN116303977A (en) Question-answering method and system based on feature classification
CN113704473A (en) Media false news detection method and system based on long text feature extraction optimization
CN116226357B (en) Document retrieval method under input containing error information
CN115424663B (en) RNA modification site prediction method based on attention bidirectional expression model
CN112052685A (en) End-to-end text entity relationship identification method based on two-dimensional time sequence network
CN115662508B (en) RNA modification site prediction method based on multiscale cross attention model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220728

Address after: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen dragon totem technology achievement transformation Co.,Ltd.

Address before: No. 818 Fenghua Road, Jiangbei District, Ningbo, Zhejiang, 315211

Patentee before: Ningbo University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220819

Address after: 530201 room 2239, 22nd floor, No. 1 office building, South plot of Nanning Shimao International Center, No. 17, Pingle Avenue, China (Guangxi) pilot Free Trade Zone, Nanning, Guangxi Zhuang Autonomous Region

Patentee after: GUANGXI ZHONGLIAN ZHIHAO TECHNOLOGY Co.,Ltd.

Address before: Room 2202, 22 / F, Wantong building, No. 3002, Sungang East Road, Sungang street, Luohu District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen dragon totem technology achievement transformation Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221012

Address after: 530201 No. 1010, 10th Floor, Building A, Changlin Center, No. 17, Geyun Road, Nanning District, China (Guangxi) Pilot Free Trade Zone, Nanning City, Guangxi Zhuang Autonomous Region

Patentee after: Guangxi beluga Information Technology Co.,Ltd.

Address before: 530201 room 2239, 22nd floor, No. 1 office building, South plot of Nanning Shimao International Center, No. 17, Pingle Avenue, China (Guangxi) pilot Free Trade Zone, Nanning, Guangxi Zhuang Autonomous Region

Patentee before: GUANGXI ZHONGLIAN ZHIHAO TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right