CN103699660B - Method for cached writing of large-scale network streaming data - Google Patents
Method for cached writing of large-scale network streaming data
- Publication number
- CN103699660B CN201310741116.6A CN201310741116A CN 103699660 B
- Authority
- CN
- China
- Prior art keywords
- data
- distributed
- write
- loaded
- file system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F16/2219—Large Object storage; Management thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24534—Query rewriting; Transformation
- G06F16/24539—Query rewriting; Transformation using cached or materialised query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/25—Integrating or interfacing systems involving database management systems
- G06F16/256—Integrating or interfacing systems involving database management systems in federated or virtual databases
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention provides a method for cached writing of large-scale network streaming data, comprising: a client encapsulates collected data and sends it to a server; the server parses the received data, confirms the source of the data, and unifies the data format; the format-unified data are written into different in-memory buffer zones according to their sources; different cache policies are formulated for the data written into the different memory buffer zones, and data meeting a policy's trigger condition are written from memory to the local file system; data written to the local file system are loaded into a distributed file system or a distributed database according to the needs of upper-layer applications; and for data loaded into the distributed file system or the distributed database, small data are periodically merged into large data blocks. The invention uses a multi-level caching mechanism to handle writes of large-scale network data that differ in source and inflow speed.
Description
Technical field
The present invention relates to the field of network technology, and in particular to a method for cached writing of large-scale network streaming data.
Background technology
In recent years, with the rapid development of the Internet, the rapid growth of data has become both an opportunity and a challenge for many industries. In the current network environment, massive data arrive continuously in real time, and the response time expected by users is also real-time. These data are collected, computed, and queried in streaming form. For example, a network anomaly detection system collects network packets, network logs, and similar data, analyzes them, and guarantees that analysis results are returned within a bounded time, thereby ensuring high availability of the network. Such a system is characterized by the following: at every moment, massive network data of various kinds flow into the system; the inflow speeds differ; and the data structures are complex and diverse (including binary files, text files, compressed files, etc.). Network anomaly detection is only one such application. Applications of this type require the underlying storage system to store the inflowing data in a unified format, to provide a unified interface to upper-layer applications for convenient retrieval, and to meet certain real-time requirements.
Facing today's big-data trend, a batch of big-data processing platforms has emerged. Among the most widely used is the Hadoop distributed processing framework, which adopts the MapReduce parallel processing model. This open-source framework includes sub-projects such as HDFS (Hadoop Distributed File System), HBase (Hadoop Database), and Hive (a data warehouse tool). HDFS is designed to store massive data (typically up to the PB level); application programs are assumed to read files in a streaming manner, and HDFS is heavily optimized for streaming-read performance. HBase is a distributed non-relational database system built on the HDFS distributed storage system; that is, HDFS provides HBase with highly reliable underlying distributed storage, while Hadoop MapReduce provides HBase with a high-performance distributed computing engine. Hive is a data warehouse tool built on top of Hadoop; it can map structured data files to database tables and provides high-level SQL query functionality, making it well suited to statistical analysis of massive structured data. In addition, widely used distributed stream-processing platforms include Yahoo's S4 and Twitter's Storm, both of which are free, open-source, distributed, fault-tolerant real-time computation systems.
However, the batch-processing mode of the Hadoop framework cannot satisfy real-time computation requirements; the system's processing speed slows down, and it is not suited to data flowing in directly. HBase and Hive both belong to the class of distributed databases, but the data flowing into the system from different network sources come in different formats and cannot be supplied directly to a distributed database without processing. The problem with dedicated stream-processing platforms such as Yahoo's S4 and Twitter's Storm is that they mainly provide streaming computation capability: all arriving data enter memory directly and are discarded after computation, without persistent storage of the inflowing data, and thus cannot satisfy the demands of the application.
Summary of the invention
(1) Technical problem to be solved
In view of the deficiencies of the prior art, the present invention provides a method for cached writing of large-scale network streaming data, which uses a multi-level caching mechanism to handle writes of large-scale network data differing in source and inflow speed.
(2) Technical solution
To achieve the above objective, the present invention is realized by the following technical solution:
A method for cached writing of large-scale network streaming data, the method comprising:
a client encapsulates collected data and sends it to a server;
the server parses the received data, confirms the source of the data, and unifies the data format;
the format-unified data are written into different in-memory buffer zones according to their sources;
different cache policies are formulated for the data written into the different memory buffer zones, and data meeting a policy's trigger condition are written from memory to the local file system;
data written to the local file system are loaded into a distributed file system or a distributed database according to the needs of upper-layer applications;
for data loaded into the distributed file system or the distributed database, small data are periodically merged into large data blocks.
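The steps above can be sketched as a single write path. The following is a minimal illustration, not the patent's implementation: class and field names (`CachedWritePipeline`, `ingest`, `flush`) and the Key-Value record layout are assumptions made for the sketch.

```python
from collections import defaultdict

class CachedWritePipeline:
    """Minimal sketch of the claimed write path (names are illustrative)."""

    def __init__(self):
        self.buffers = defaultdict(list)   # one in-memory buffer zone per source
        self.local_fs = []                 # stand-in for the local file system

    def ingest(self, packet):
        # Server side: parse, confirm the source, unify to Key-Value form.
        record = {"source": packet["src"], "key": packet["id"], "value": packet["payload"]}
        self.buffers[record["source"]].append(record)

    def flush(self, source, trigger):
        # Persist one buffer zone to the local file system when its
        # cache policy's trigger condition is met.
        if source in self.buffers and trigger(self.buffers[source]):
            self.local_fs.append(self.buffers.pop(source))
            return True
        return False

pipe = CachedWritePipeline()
pipe.ingest({"src": "netlog", "id": 1, "payload": "GET /"})
pipe.ingest({"src": "netlog", "id": 2, "payload": "POST /"})
flushed = pipe.flush("netlog", trigger=lambda buf: len(buf) >= 2)
```

Loading into the distributed file system or database, and the periodic small-data merge, would follow the flush; those stages are sketched separately below in the detailed embodiment.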
After the format unification, the data should be in Key-Value form or relational form.
Preferably, the method further comprises: for data to be loaded into the distributed database, further judging whether the data are relational; if relational, loading them into a distributed relational database; if not relational, loading them into a distributed No-SQL database.
Preferably, the method further comprises: for data loaded into the distributed database, converting the storage format of the merged large data into a row-column storage format specially optimized for distributed databases.
A system for cached writing of large-scale network streaming data, the system comprising:
a data transmission module, configured to encapsulate collected data and send it to a server;
a data formatting module, configured to parse the received data, confirm the source of the data, and unify the data format;
a data caching module, configured to write the format-unified data into different in-memory buffer zones according to their sources;
a data persistence module, configured to formulate different cache policies for the data written into the different buffer zones, and to write data meeting a policy's trigger condition from memory to the local file system;
a data loading module, configured to load data written to the local file system into a distributed file system or a distributed database according to the needs of upper-layer applications;
a data consolidation module, configured to periodically merge small data into large data for the data loaded into the distributed file system or the distributed database.
In the data formatting module, after the format unification, the data should be in Key-Value form or relational form.
Preferably, the data loading module further judges whether data to be loaded into the distributed database are relational; if relational, it loads them into the distributed relational database; if not relational, it loads them into the distributed No-SQL database.
Preferably, the data consolidation module further converts the storage format of the large data merged in the distributed database into a row-column storage format specially optimized for distributed databases.
(3) Beneficial effects
The present invention has at least the following advantages:
The provided method uses a multi-level caching mechanism to cache data. After the server receives data, the data formats are unified, and different cache policies are formulated according to the different sources of the data, so that writes of large-scale network data differing in source and inflow speed can be handled. Fragmented small data are merged into large data blocks, which increases data processing speed, reduces data storage space, lowers data management cost, and satisfies big-data processing demands.
In the provided method, after the format unification the data are in Key-Value form or relational form, which is convenient for direct use in subsequent computation.
In addition to merging the small data loaded into the distributed database into large data blocks, the provided method converts the data storage format into a row-column storage format specially optimized for distributed databases, saving storage space while improving retrieval efficiency.
Brief description of the drawings
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flow chart of a method for cached writing of large-scale network streaming data provided by an embodiment of the present invention;
Fig. 2 is a flow chart of a method for cached writing of large-scale streaming data provided by a preferred embodiment of the present invention;
Fig. 3 is an illustration of cache policies.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, an embodiment of the present invention provides a method for cached writing of large-scale network streaming data, with the following steps:
Step 101: the client encapsulates the collected data and sends it to the server;
Step 102: the server parses the received data, confirms the source of the data, and unifies the data format;
Step 103: the format-unified data are written into different in-memory buffer zones according to their sources;
Step 104: different cache policies are formulated for the data written into the different memory buffer zones, and data meeting a policy's trigger condition are written from memory to the local file system;
Step 105: data written to the local file system are loaded into a distributed file system or a distributed database according to the needs of upper-layer applications;
Step 106: for data loaded into the distributed file system or the distributed database, small data are merged into large data according to a set policy.
The method provided by this embodiment uses a multi-level caching mechanism to cache data. After the server receives data, the data formats are unified, and different cache policies are formulated according to the different sources of the data, so that writes of large-scale network data differing in source and inflow speed can be handled. Fragmented small data are merged into large data blocks, which increases data processing speed, reduces data storage space, lowers data management cost, and satisfies big-data processing demands.
The implementation process of a preferred embodiment of the present invention is described below through a more specific example. Referring to Fig. 2, the steps of the method are as follows:
Step 201: the client encapsulates the collected data and sends it to the server.
In this step, the client is only responsible for simply packaging the data collected at each moment, marking its data source, and sending it to the server over the HTTP protocol using the POST method.
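The client-side packaging can be sketched as follows. The JSON envelope and its field names (`source`, `items`) are illustrative assumptions, as the patent does not specify a wire format; the actual HTTP POST is indicated in a comment so the sketch stays runnable offline.

```python
import json

def encapsulate(samples, source_tag):
    """Client side: lightly package the data collected at one moment and
    mark its source. Field names are assumptions, not from the patent."""
    return json.dumps({"source": source_tag, "items": samples}).encode("utf-8")

# The envelope would then be sent via HTTP POST, e.g. with
# urllib.request.Request(url, data=body, method="POST"); the send itself
# is omitted here.
body = encapsulate([{"ts": 1, "pkt": "SYN"}], source_tag="netflow")
decoded = json.loads(body)
```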
Step 202: the server parses the received data, confirms the source of the data, and unifies the data format.
In this step, the data format is unified into Key-Value form or relational form, which is convenient for direct use in subsequent computation.
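Format unification might look like the following sketch. The per-source raw formats (`key=value` log lines, CSV rows) and the source names are assumptions for illustration; the patent only requires that the output be Key-Value or relational form.

```python
import csv
import io

def unify(raw, source):
    """Server side: parse a raw record and unify it into Key-Value form,
    or into a relational row for tabular sources (formats assumed)."""
    if source == "netlog":
        # e.g. "ts=9 host=a" -> {"ts": "9", "host": "a"}  (Key-Value form)
        return dict(field.split("=", 1) for field in raw.split())
    if source == "csv_report":
        # one relational row, e.g. "9,a,200" -> ["9", "a", "200"]
        return next(csv.reader(io.StringIO(raw)))
    raise ValueError(f"unknown source: {source}")

kv = unify("ts=9 host=a", "netlog")
row = unify("9,a,200", "csv_report")
```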
Step 203: the format-unified data are written into different in-memory buffer zones according to their sources, and different cache policies are formulated for the data written into the different memory buffer zones.
In this step, for each type of data, a cache policy is formulated according to its inflow speed, its data volume, and the timeliness requirements that the upper-layer application places on the data. Fig. 3 illustrates concrete cache policies. If the upper-layer application performs real-time monitoring, the data belong to the time-sensitive class and their timeliness must be guaranteed, and massive data may accumulate within a very short interval: for example, the first row in Fig. 3 accumulates 400 MB of data in one minute, so, considering these conditions together, the cache policy is set to persist once every 1 minute. For another type of data, such as the second row in Fig. 3, only 50 MB accumulates in 10 minutes, and application demand shows that the real-time requirement is not high, so the cache policy can be set to persist once for every 128 MB of accumulated data.
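The two policies illustrated by Fig. 3 reduce to a time-based trigger and a size-based trigger. The sketch below is an assumption about how such a policy object could be expressed; the thresholds mirror the figure (persist every minute vs. persist per 128 MB).

```python
class CachePolicy:
    """One cache policy per buffer zone: flush on elapsed time,
    accumulated size, or whichever trigger fires first."""

    def __init__(self, max_age_s=None, max_bytes=None):
        self.max_age_s = max_age_s
        self.max_bytes = max_bytes

    def should_flush(self, age_s, buffered_bytes):
        if self.max_age_s is not None and age_s >= self.max_age_s:
            return True
        if self.max_bytes is not None and buffered_bytes >= self.max_bytes:
            return True
        return False

monitor_policy = CachePolicy(max_age_s=60)        # Fig. 3 row 1: persist every minute
slow_policy = CachePolicy(max_bytes=128 * 2**20)  # Fig. 3 row 2: persist per 128 MB

r1 = monitor_policy.should_flush(age_s=61, buffered_bytes=400 * 2**20)
r2 = slow_policy.should_flush(age_s=300, buffered_bytes=50 * 2**20)
```

A single class handling both triggers keeps the persistence module uniform: each buffer zone carries its own `CachePolicy`, and the flush loop only ever calls `should_flush`.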
Step 204: judge whether the data meet the threshold condition of the corresponding cache policy; if yes, go to step 205; if not, return to step 203.
Step 205: write the data from memory to the local file system.
Step 206: according to upper-layer application demand, judge whether the data are to be loaded into the distributed database; if yes, go to step 207; if not, go to step 210.
Step 207: for the data to be loaded into the distributed database, judge whether they are relational; if yes, go to step 208; if not, go to step 209.
Step 208: load the data into the distributed relational database.
Step 209: load the data into the distributed No-SQL database.
Step 210: load the data into the distributed file system.
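The routing decision of steps 206 through 210 can be sketched as a small function. The destination names and the relational test (a row represented as a list or tuple, versus a Key-Value dict) are assumptions for illustration only.

```python
def route(record, wants_database):
    """Decide the destination of steps 206-210 (names illustrative)."""
    if not wants_database:
        return "distributed_file_system"       # step 210
    if isinstance(record, (list, tuple)):      # relational row (assumed shape)
        return "distributed_relational_db"     # step 208
    return "distributed_nosql_db"              # step 209

a = route(("9", "a", "200"), wants_database=True)
b = route({"key": "k", "value": "v"}, wants_database=True)
c = route({"key": "k", "value": "v"}, wants_database=False)
```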
Step 211: judge whether the data loaded into the distributed file system meet the condition of the set policy; if yes, go to step 214; if not, return to step 210.
In this step, the condition of the set policy is formulated according to factors such as the inflow speed of the data. For data that do not flow in quickly, for example data that accumulate only 64 MB in one day, small data blocks can be merged periodically so that the resulting large data meet the processing demands of the distributed file system. For instance, the default block size for files in the HDFS file system is 64 MB, and data of less than 64 MB are stored as 64 MB; files smaller than 64 MB therefore waste space, and the excessive metadata slows the system down. This method periodically merges small data blocks into multiples of 64 MB, which reduces disk fragmentation, improves storage space utilization, and increases retrieval efficiency.
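One way to plan such merges is a simple greedy packing of small files into groups of at most one HDFS block each. This is a sketch under the description's 64 MB block-size assumption, not the patent's algorithm (which is not specified beyond "merge into multiples of 64 MB").

```python
BLOCK = 64 * 2**20  # HDFS default block size assumed by the description

def plan_merges(file_sizes, block=BLOCK):
    """Greedy sketch: pack small files into merged groups of at most
    one block each, so few files sit far below the block size."""
    groups, current, used = [], [], 0
    for size in sorted(file_sizes, reverse=True):
        if used + size > block and current:
            groups.append(current)   # close the full group
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        groups.append(current)
    return groups

# e.g. eight 16 MB fragments collapse into two 64 MB merged groups
groups = plan_merges([16 * 2**20] * 8)
```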
Step 212: judge whether the data loaded into the distributed No-SQL database meet the condition of the set policy; if yes, go to step 215; if not, return to step 209.
Step 213: judge whether the data loaded into the distributed relational database meet the condition of the set policy; if yes, go to step 216; if not, return to step 208.
Step 214: merge the small data meeting the condition in the distributed file system into large data blocks.
Step 215: merge the small data meeting the condition in the distributed No-SQL database into large data blocks.
In this step, besides merging small data into large data blocks, the data storage format is also converted into a row-column storage format specially optimized for distributed databases, such as RCFile, which saves storage space while improving retrieval efficiency.
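The idea behind a row-column format like RCFile is to split rows into row groups and then store each group column by column, so a query can read only the columns it needs and columns of like-typed values compress well. The sketch below illustrates only that layout transformation; real RCFile adds per-group metadata and compression, and the function name is an assumption.

```python
def to_row_groups(rows, group_size):
    """Sketch of the RCFile layout idea: partition rows into row groups,
    then transpose each group into per-column lists."""
    groups = []
    for i in range(0, len(rows), group_size):
        group = rows[i:i + group_size]
        # transpose the row group: one list per column
        groups.append([list(col) for col in zip(*group)])
    return groups

rows = [("u1", 200), ("u2", 404), ("u3", 200), ("u4", 500)]
layout = to_row_groups(rows, group_size=2)
```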
Step 216: merge the small data blocks meeting the condition in the distributed relational database into large data blocks.
In this step, besides merging small data into large data blocks, the data storage format is also converted into a row-column storage format specially optimized for distributed databases, such as RCFile, which saves storage space while improving retrieval efficiency.
Step 217: end the cached write of the data.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features; and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. A method for cached writing of large-scale network streaming data, characterized in that the method comprises:
a client encapsulates collected data and sends it to a server;
the server parses the received data, confirms the source of the data, and unifies the data format;
the format-unified data are written into different in-memory buffer zones according to their sources;
different cache policies are formulated for the data written into the different memory buffer zones, and data meeting a policy's trigger condition are written from memory to the local file system;
data written to the local file system are loaded into a distributed file system or a distributed database according to the needs of upper-layer applications;
for data loaded into the distributed file system or the distributed database, small data are periodically merged into large data blocks.
2. The method according to claim 1, characterized in that after the format unification, the data should be in Key-Value form or relational form.
3. The method according to claim 1, characterized in that the method further comprises: for data to be loaded into the distributed database, further judging whether the data are relational; if relational, loading them into a distributed relational database; if not relational, loading them into a distributed No-SQL database.
4. The method according to claim 1, characterized in that the method further comprises: for data loaded into the distributed database, converting the storage format of the merged large data into a row-column storage format specially optimized for distributed databases.
5. A system for cached writing of large-scale network streaming data, characterized in that the system comprises:
a data transmission module, configured to encapsulate collected data and send it to a server;
a data formatting module, configured to parse the received data, confirm the source of the data, and unify the data format;
a data caching module, configured to write the format-unified data into different in-memory buffer zones according to their sources;
a data persistence module, configured to formulate different cache policies for the data written into the different buffer zones, and to write data meeting a policy's trigger condition from memory to the local file system;
a data loading module, configured to load data written to the local file system into a distributed file system or a distributed database according to the needs of upper-layer applications;
a data consolidation module, configured to periodically merge small data into large data for the data loaded into the distributed file system or the distributed database.
6. The system according to claim 5, characterized in that in the data formatting module, after the format unification, the data should be in Key-Value form or relational form.
7. The system according to claim 5, characterized in that the data loading module further judges whether data to be loaded into the distributed database are relational; if relational, it loads them into the distributed relational database; if not relational, it loads them into the distributed No-SQL database.
8. The system according to claim 5, characterized in that the data consolidation module further converts the storage format of the large data merged in the distributed database into a row-column storage format specially optimized for distributed databases.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310741116.6A CN103699660B (en) | 2013-12-26 | 2013-12-26 | Method for cached writing of large-scale network streaming data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103699660A CN103699660A (en) | 2014-04-02 |
CN103699660B true CN103699660B (en) | 2016-10-12 |
Family
ID=50361188
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310741116.6A Active CN103699660B (en) | 2013-12-26 | 2013-12-26 | Method for cached writing of large-scale network streaming data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103699660B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107111615A (en) * | 2014-05-28 | 2017-08-29 | 北京大学深圳研究生院 | A kind of data cache method and device for distributed memory system |
CN105205084B (en) * | 2014-06-30 | 2018-10-16 | 清华大学 | A kind of data processing method, apparatus and system |
CN104536699B (en) * | 2014-12-11 | 2017-10-17 | 中国科学院声学研究所 | A kind of stream data wiring method based on embedded file system |
CN104598563B (en) * | 2015-01-08 | 2018-09-04 | 北京京东尚科信息技术有限公司 | High concurrent date storage method and device |
CN104657502A (en) * | 2015-03-12 | 2015-05-27 | 浪潮集团有限公司 | System and method for carrying out real-time statistics on mass data based on Hadoop |
CN105512168A (en) * | 2015-11-16 | 2016-04-20 | 天津南大通用数据技术股份有限公司 | Cluster database composite data loading method and apparatus |
CN105975521A (en) * | 2016-04-28 | 2016-09-28 | 乐视控股(北京)有限公司 | Stream data uploading method and device |
CN106484329B (en) * | 2016-09-26 | 2019-01-08 | 浪潮电子信息产业股份有限公司 | A kind of big data transmission integrity guard method based on multistage storage |
CN107943802A (en) * | 2016-10-12 | 2018-04-20 | 北京京东尚科信息技术有限公司 | A kind of log analysis method and system |
CN106909641B (en) * | 2017-02-16 | 2020-09-29 | 青岛高校信息产业股份有限公司 | Real-time data memory |
CN107180082B (en) * | 2017-05-03 | 2020-12-18 | 珠海格力电器股份有限公司 | Data updating system and method based on multi-level cache mechanism |
CN111367979B (en) * | 2020-03-05 | 2021-10-26 | 广州快决测信息科技有限公司 | Data collection method and system |
EP3951610A4 (en) | 2020-03-05 | 2022-06-22 | Guangzhou Quick Decision Information Technology Co., Ltd. | Method and system for automatically generating data determining result |
CN114430351B (en) * | 2022-04-06 | 2022-06-14 | 北京快立方科技有限公司 | Distributed database node secure communication method and system |
CN117827979B (en) * | 2024-03-05 | 2024-05-17 | 数翊科技(北京)有限公司武汉分公司 | Data batch import method and device, electronic equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279429A (en) * | 2013-05-24 | 2013-09-04 | 浪潮电子信息产业股份有限公司 | Application-aware distributed global shared cache partition method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005309653A (en) * | 2004-04-20 | 2005-11-04 | Hitachi Global Storage Technologies Netherlands Bv | Disk device and cache control method |
- 2013-12-26: application CN201310741116.6A filed in China; patent CN103699660B/en, status Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103279429A (en) * | 2013-05-24 | 2013-09-04 | 浪潮电子信息产业股份有限公司 | Application-aware distributed global shared cache partition method |
Non-Patent Citations (1)
Title |
---|
The current situation and challenges of distributed caching technology in cloud computing environments; Qin Xiulei et al.; Journal of Software (China Academic Journal Full-text Database); 2013-01-31; Vol. 24, No. 01; full text *
Also Published As
Publication number | Publication date |
---|---|
CN103699660A (en) | 2014-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103699660B (en) | Method for cached writing of large-scale network streaming data | |
US11582123B2 (en) | Distribution of data packets with non-linear delay | |
CN104820670B (en) | A kind of acquisition of power information big data and storage method | |
CN108628929B (en) | Method and apparatus for intelligent archiving and analysis | |
US11182098B2 (en) | Optimization for real-time, parallel execution of models for extracting high-value information from data streams | |
US10262032B2 (en) | Cache based efficient access scheduling for super scaled stream processing systems | |
WO2017071134A1 (en) | Distributed tracking system | |
Maarala et al. | Low latency analytics for streaming traffic data with Apache Spark | |
Lai et al. | Towards a framework for large-scale multimedia data storage and processing on Hadoop platform | |
Isah et al. | A scalable and robust framework for data stream ingestion | |
US10698935B2 (en) | Optimization for real-time, parallel execution of models for extracting high-value information from data streams | |
CN103491187A (en) | Big data unified analyzing and processing method based on cloud computing | |
CN109710731A (en) | A kind of multidirectional processing system of data flow based on Flink | |
CN106951552A (en) | A kind of user behavior data processing method based on Hadoop | |
Dagade et al. | Big data weather analytics using hadoop | |
CN105630810A (en) | Method for uploading mass small files in distributed storage system | |
Moyne et al. | Big data emergence in semiconductor manufacturing advanced process control | |
Wang et al. | Block storage optimization and parallel data processing and analysis of product big data based on the hadoop platform | |
CN112015952A (en) | Data processing system and method | |
de Souza et al. | Aten: A dispatcher for big data applications in heterogeneous systems | |
Hsu et al. | Effective memory reusability based on user distributions in a cloud architecture to support manufacturing ubiquitous computing | |
Jung et al. | Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis | |
Ma et al. | Banking Comprehensive Risk Management System Based on Big Data Architecture of Hybrid Processing Engines and Databases | |
Ma et al. | Bank big data architecture based on massive parallel processing database | |
Ashwitha et al. | Movie dataset analysis using hadoop-hive |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20180111 Address after: 210042 No. 699-22, building 18, Xuanwu District, Nanjing, Jiangsu Patentee after: CERTUSNET CORP. Address before: 100084 mailbox 100084-82, Tsinghua Yuan, Haidian District, Beijing Patentee before: Tsinghua University |