CN108009019B - Distributed data instance locating method, client and distributed computing system


Info

Publication number
CN108009019B
CN108009019B (application CN201610964886.0A)
Authority
CN
China
Prior art keywords
data
module
processed
serialization result
serialization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610964886.0A
Other languages
Chinese (zh)
Other versions
CN108009019A (en)
Inventor
刘成彦
李瑜婷
刘华明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wangsu Science and Technology Co Ltd
Original Assignee
Wangsu Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wangsu Science and Technology Co Ltd filed Critical Wangsu Science and Technology Co Ltd
Priority to CN201610964886.0A priority Critical patent/CN108009019B/en
Publication of CN108009019A publication Critical patent/CN108009019A/en
Application granted granted Critical
Publication of CN108009019B publication Critical patent/CN108009019B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for locating instances of distributed data, a client and a distributed computing system, wherein the distributed computing system comprises a plurality of clients and at least one server, and the method comprises the following steps: a target client acquires data to be processed; the data to be processed is divided into a plurality of parts; a serialization result of each part is obtained; the serialization results of all the parts are combined into the serialization result of the data to be processed; and hash calculation is performed on the serialization result of the data to be processed, thereby designating the processing instance of the data to be processed. The invention can ensure the accuracy of the data transmitted to the server side in stream computing, greatly reduces the time consumed by data serialization, and further reduces the time consumed in locating the redis instance, thereby improving the operating speed of the system.

Description

Distributed data instance locating method, client and distributed computing system
Technical Field
The embodiments of the invention relate to the technical field of the internet, and in particular to a method for locating instances of distributed data, a client and a distributed computing system.
Background
The rapid development of the internet industry has brought explosive growth in data scale and has given big data an increasingly pronounced streaming character. The traditional batch processing mode can hardly meet the real-time requirements of streaming big data processing, so more efficient distributed computing systems are being applied more and more widely.
In the business processing of streaming computation, the system uses redis (a high-performance key-value database) to perform calculations on the data and temporarily stores the data in redis. Since massive data enters the computation, multiple redis instances need to be used simultaneously during the computation. In the calculation process, the client generates a redis key according to the service requirement, calculates the redis instance corresponding to the redis key, and finally sends the corresponding command to the redis instance where the redis key is located for processing. In this process, it must be ensured that the same redis key generated during streaming calculation always enters the same redis instance; otherwise, the result data will be inaccurate. During calculation the same redis key may be generated many times, which means the redis instance corresponding to that key has to be located repeatedly.
However, the prior art does not provide a feasible way to determine the redis instance that processes a given redis key. When locating the redis instance during calculation and processing the same redis key, the system has to spend a large amount of resources (such as CPU and time) on repeated serialization, which reduces the efficiency of locating the redis instance and affects the operating speed of the system.
Disclosure of Invention
The invention aims to overcome the defect that a prior-art distributed computing system provides no method for handling the same redis key, which causes the system to consume a large amount of resources, and to provide a method for locating instances of distributed data that optimizes the generation of the redis key so that the redis instance where the redis key is located can be determined quickly and accurately, together with a corresponding client and distributed computing system.
The invention solves the technical problems through the following technical scheme:
a method for locating instances of distributed data for use in a distributed computing system, the distributed computing system including a plurality of clients and at least one server, the method comprising:
a target client acquires data to be processed;
dividing the data to be processed into a plurality of parts;
obtaining a serialization result of each part;
combining the serialization result of each part into the serialization result of the data to be processed;
and performing hash calculation on the serialization result of the data to be processed, thereby designating the processing instance of the data to be processed (a minimal sketch of these steps is given below).
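A minimal sketch of these steps, assuming an underscore-delimited key, UTF-8 encoding as the serializer and a simple modulo mapping onto instances (all of these are illustrative assumptions, not part of the disclosure):

import hashlib

def locate_instance(to_be_processed: str, num_instances: int) -> int:
    """Sketch of the claimed flow: divide, serialize per part, combine, hash."""
    # Divide the data to be processed into a plurality of parts
    # (splitting on '_' is an illustrative choice, not mandated by the patent).
    parts = to_be_processed.split("_")
    # Obtain the serialization result of each part
    # (UTF-8 encoding stands in for the unspecified serializer).
    serialized_parts = [p.encode("utf-8") for p in parts]
    # Combine the per-part results into the serialization result of the
    # data to be processed, preserving the original field order.
    serialized = b"_".join(serialized_parts)
    # Hash the combined result and map it to a processing instance
    # (SHA-1 plus modulo is a placeholder for the hash strategies described later).
    digest = int.from_bytes(hashlib.sha1(serialized).digest(), "big")
    return digest % num_instances

print(locate_instance("alice_20161029_shanghai", 8))

The size-gated cache described in the preferred steps below refines the per-part serialization stage of this sketch.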
In existing streaming computing, many identical redis keys are generated, and these redis keys may differ from one another by only a few substrings. If every such redis key is processed in the same full processing mode, the system performs a great deal of repeated work, which not only prevents identical redis keys from being processed quickly but also occupies a large amount of system resources.
The invention divides the redis key into a plurality of parts and judges whether each part already exists in the cache. If it does, the calculation result in the cache is called directly; if it does not, the missing part is serialized and the result is stored in the cache, which makes the next processing of that part convenient. The invention can thus effectively improve the system operating speed.
Preferably, the step of obtaining the serialization result of each part comprises:
for each part, judging whether the data size of the part is larger than a preset value;
if so,
determining whether the serialization result of the part exists in the memory cache region,
if the serialization result exists in the memory cache region, acquiring the serialization result from the memory cache region;
if it does not exist in the memory cache region, serializing the part, acquiring the serialization result of the part, and storing the serialization result in the memory cache region;
if not,
serializing the part to obtain its serialization result.
Further, the invention optimizes the utilization of the cache: only sufficiently large parts are cached, thereby saving system resources. For each part, when the size of the part is larger than the preset value, the part is stored in the cache, and the calculation for that part can fetch data directly from the cache, which speeds up the serialization of the part. When the size of the part is smaller than the preset value, computing the serialization result directly is faster than reading it from the cache, so parts smaller than the preset value do not need to be stored in the cache. The invention can therefore use the cache reasonably and further improve the speed of data serialization.
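A hedged sketch of this size-gated cache, assuming a 256-byte preset value, a plain dictionary as the memory cache region and UTF-8 encoding as the serializer (all illustrative choices, not mandated by the disclosure):

PRESET_VALUE = 256          # bytes; the disclosure allows 128 to 512 bytes
serialization_cache = {}    # stands in for the memory cache region

def serialize_part(part: str) -> bytes:
    """Serialize one part, consulting the cache only when the part is large."""
    if len(part.encode("utf-8")) <= PRESET_VALUE:
        # Small part: serializing directly is faster than a cache lookup.
        return part.encode("utf-8")
    cached = serialization_cache.get(part)
    if cached is not None:
        # Large part seen before: reuse the cached serialization result.
        return cached
    # Large part seen for the first time: serialize it and store the result.
    result = part.encode("utf-8")
    serialization_cache[part] = result
    return result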
Preferably, the step of obtaining the serialization result of each part comprises:
each part is serialized, and the serialization result is obtained.
Preferably, the method for dividing the data to be processed into a plurality of parts comprises:
and segmenting the data to be processed according to fields and/or field combinations.
The recording structure of the redis key can be divided into a plurality of fields. For example, if the redis key is recorded in the structure 'name_time_address', the underscores divide it into 3 parts representing the name, the time and the address according to their different meanings.
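For instance, splitting such a key while recording the field order might look like this (the concrete key value is a hypothetical example):

redis_key = "alice_20161029_shanghai"   # hypothetical 'name_time_address' key

# The underscores divide the key into its name, time and address parts;
# the list order records the original field order for later recombination.
parts = redis_key.split("_")
print(parts)   # ['alice', '20161029', 'shanghai']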
Preferably, the preset value ranges from 128 bytes to 512 bytes.
Preferably, it is judged whether the size of the serialization result of the data to be processed is greater than or equal to a preset threshold;
if yes, a murmur_128 hash strategy is selected;
otherwise, an fnv1a hash strategy is selected.
Preferably, the preset threshold is 432 bytes.
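A sketch of this strategy selection; FNV-1a is implemented inline with its standard 32-bit constants, and the third-party mmh3 package is assumed here as a stand-in for the murmur_128 strategy (the disclosure does not name a particular implementation):

import mmh3  # third-party package, assumed as a murmur_128 stand-in

PRESET_THRESHOLD = 432  # bytes, per the preferred embodiment

def fnv1a_32(data: bytes) -> int:
    """Standard FNV-1a hash, 32-bit variant."""
    h = 0x811C9DC5
    for byte in data:
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF
    return h

def hash_serialization_result(serialized: bytes) -> int:
    """Use murmur_128 for large serialization results and FNV-1a for small ones."""
    if len(serialized) >= PRESET_THRESHOLD:
        return mmh3.hash128(serialized)
    return fnv1a_32(serialized)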
Preferably, the server is a redis server, the data to be processed is a redis key, and the method includes:
the client divides the redis key into a plurality of parts according to preset conditions in the redis key, and records the sequence of fields in the redis key;
and combining the serialization results of all parts into the serialization result of the redis key according to the sequence.
The field order is recorded when the redis key is divided, and after the serialization results are obtained, the parts are recombined according to that field order, so that the order of the parts is not disturbed and the data is transmitted to the server side in the correct order, making the data transmission more accurate.
Preferably, the method comprises the following steps (a dispatch sketch is given after this list):
calculating the hash value of the obtained serialization result of the redis key;
obtaining a corresponding relation between the redis key and the redis instance according to the hash value;
and sending the redis command corresponding to the redis key to the redis instance according to the corresponding relation.
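A sketch of this correspondence and dispatch, assuming the redis-py client, hypothetical instance addresses and a modulo mapping from hash value to instance, and reusing hash_serialization_result from the sketch above (the disclosure only requires that the same redis key always resolves to the same redis instance):

import redis  # redis-py client, used here purely for illustration

# Hypothetical pool of redis instances; hosts and ports are assumptions.
instances = [redis.Redis(host=f"10.0.0.{i}", port=6379) for i in (1, 2, 3)]

def dispatch(redis_key: str, serialized_key: bytes) -> None:
    """Map the key's hash value to a redis instance and send the command there."""
    hash_value = hash_serialization_result(serialized_key)
    instance = instances[hash_value % len(instances)]
    # Send the redis command for this key to the instance it corresponds to,
    # so that the same redis key always reaches the same redis instance.
    instance.incr(redis_key)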
The invention also provides a client for use in a distributed computing system, the distributed computing system comprising a plurality of clients and at least one server, characterized in that the client comprises an acquisition module, a dividing module, an operation module, a combination module and a calculation module,
the acquisition module is used for acquiring data to be processed;
the dividing module is used for dividing the data to be processed into a plurality of parts;
the operation module is used for acquiring a serialization result of each part;
the combination module is used for combining the serialization result of each part into the serialization result of the data to be processed;
the calculation module is used for carrying out Hash calculation on the serialization result of the data to be processed, so as to specify the processing example of the data to be processed.
Preferably, the client further comprises a first judging module, a second judging module, a reading module, a storage module and a processing module,
the first judging module is used for judging whether the data size of a part is larger than a preset value; if so, the second judging module is called, and if not, the processing module is called;
the second judging module is used for judging whether the serialization result of the part exists in a memory cache region; if so, the reading module is called, and if not, the storage module is called;
the processing module is configured to serialize the portion;
the reading module is used for acquiring a serialization result in the memory cache region;
the storage module is used for serializing the part, acquiring a serialization result of the part and storing the serialization result in the memory cache region.
The invention further provides a distributed computing system, which is characterized in that the distributed computing system comprises a plurality of clients and at least one server.
On the basis of common knowledge in the field, the above preferred conditions can be combined arbitrarily to obtain the preferred embodiments of the invention.
The positive progress effects of the invention are as follows:
ensuring a one-to-one correspondence between redis keys and redis instances: a hash value is generated from the combined serialization result of the redis key, and this hash value accurately locates the redis instance where the key resides;
improving the operating speed of the system: when the redis key is generated, the parts that constitute the key are defined; during serialization, each part is handled separately and the larger serialization results are cached, so a key that enters again can locate the redis instance using the cached results. The time consumed in locating the redis instance is greatly reduced, thereby increasing the operating speed of the system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a distributed computing system according to embodiment 1 of the present invention.
Fig. 2 is a flowchart of the method for locating instances of distributed data according to embodiment 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Referring to fig. 1, the present embodiment provides a distributed computing system 1, where the distributed computing system includes 5 clients 11, 1 server 12 and 1 database 13; the server is a redis server, the redis server includes a plurality of redis instances, and the database is HBase.
For any one of the 5 clients, the client 11 includes a calculating module 110, an obtaining module 111, a dividing module 112, a first judging module 113, a second judging module 114, an operating module 115, a reading module 116, a storing module 117, a processing module 118, and a combining module 119.
The acquisition module is used for acquiring data to be processed. The data to be processed is a redis key.
The dividing module is used for dividing the data to be processed into a plurality of parts.
The first judging module is used for judging whether the data size of a part is larger than 256 bytes; if so, the second judging module is called, and if not, the processing module is called.
256 bytes is a preset value, and the value range is 128 bytes to 512 bytes.
To further increase the operating speed, only sufficiently large parts are cached, thereby saving system resources. For each part, when the size of the part is larger than the preset value, the part is stored in the cache, and the calculation for that part can fetch data directly from the cache, which speeds up the serialization of the part. When the size of the part is smaller than the preset value, computing the serialization result directly is faster than reading it from the cache, so parts smaller than the preset value do not need to be stored in the cache. The invention can therefore use the cache reasonably and further improve the speed of data serialization.
The processing module is used to serialize the part.
The second judging module is used for judging whether the serialization result of the part exists in the cache; if so, the reading module is called, and if not, the storage module is called.
The reading module is used for acquiring the serialization result in the cache.
The storage module is used for serializing the part, acquiring a serialization result of the part and storing the serialization result in the memory cache region.
The operation module is used for acquiring the serialization result of each part.
The combination module is used for combining the serialization results of all the parts into the serialization result of the data to be processed.
The calculation module is used for performing hash calculation on the serialization result of the data to be processed, so as to designate the processing instance of the data to be processed.
Referring to fig. 2, using the distributed computing system, this embodiment can implement the method for locating instances of distributed data as follows:
Step 100, a client acquires data to be processed, wherein the data to be processed is a redis key.
Step 101, the client divides the redis key into a plurality of parts according to the fields or the field combination, and records the field sequence in the redis key.
Step 102, for each of the parts, determining whether the data size of the part is greater than 256 bytes; if so, executing step 103; otherwise, executing step 106.
The preset value of the present embodiment takes an optimal value of 256 bytes; the preset value can also be selected from the range of 128 bytes to 512 bytes. Calculation and experiments show that reading the serialization result from the cache for parts larger than 256 bytes effectively saves system resources. When the size of the part is smaller than the preset value, computing the serialization result directly is faster than reading it from the cache, so parts smaller than the preset value do not need to be stored in the cache. This embodiment can therefore use the cache reasonably and further improve the speed of data serialization.
Step 103, judging whether the serialization result of the part exists in the cache; if so, executing step 104; otherwise, executing step 105.
Step 104, acquiring the serialization result from the cache, and then executing step 107.
Step 105, serializing the part, obtaining the serialization result and storing it in the cache, and then executing step 107.
Step 106, serializing the part.
Step 107, combining the serialization results of all the parts into the serialization result of the redis key according to the recorded order.
Step 108, determining whether the size of the serialization result of the data to be processed is greater than or equal to 432 bytes; if so, executing step 109; otherwise, executing step 110.
Step 109, selecting the murmur_128 hash strategy to perform hash calculation on the serialization result of the data to be processed, thereby designating the processing instance of the data to be processed, and then ending the flow.
Step 110, selecting the fnv1a hash strategy to perform hash calculation on the serialization result of the data to be processed, thereby designating the processing instance of the data to be processed.
When processing a redis key of the form 'name_address_time', the present embodiment divides the redis key into three parts, namely a name part, an address part and a time part, according to the meaning of each field. The order of the fields is fixed, and after each part has been processed, the serialization results obtained for the parts are arranged in that order.
The serialization result of the redis key is consistent with the redis key, and the accuracy of data is ensured when the redis key serialization result is sent.
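Putting the embodiment's steps together for such a key, a hedged end-to-end sketch (reusing serialize_part and hash_serialization_result from the earlier sketches; the concrete key value and the instance count of 3 are assumptions):

redis_key = "alice_shanghai_20161029"   # hypothetical 'name_address_time' key

# Steps 100-101: acquire the key and divide it by field, keeping the field order.
parts = redis_key.split("_")

# Steps 102-106: serialize each part, using the cache for parts over 256 bytes.
serialized_parts = [serialize_part(p) for p in parts]

# Step 107: combine the per-part results in the recorded field order.
serialized_key = b"_".join(serialized_parts)

# Steps 108-110: pick the hash strategy by size and designate the instance.
hash_value = hash_serialization_result(serialized_key)
print(hash_value % 3)   # 3 redis instances assumed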
The processed data is then sent to the redis server holding the corresponding redis instance, an intermediate result is obtained from the redis server, and the streaming computing system stores the intermediate result in the HBase database.
The method for locating instances of distributed data, the client and the distributed computing system described above can ensure a one-to-one correspondence between redis keys and redis instances. A hash value is generated from the combined serialization result of the redis key, and this hash value accurately locates the redis instance where the key resides. In particular, the operating speed of the system can be increased: when a redis key is generated, the parts that constitute the key are defined; during serialization, each part is handled separately, and serialization results of parts longer than 256 bytes are cached. A key that enters again can locate the redis instance using the cached results. The time consumed in locating the redis instance is greatly reduced, thereby increasing the operating speed of the system.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for locating instances of distributed data for use in a distributed computing system, the distributed computing system including a plurality of clients and at least one server, the method comprising:
a target client acquires data to be processed;
dividing the data to be processed into a plurality of parts;
obtaining a serialization result of each part;
combining the serialization result of each part into the serialization result of the data to be processed;
performing hash calculation on the serialization result of the data to be processed so as to designate a processing instance of the data to be processed;
wherein the step of obtaining the serialization result of each part comprises:
for each part, judging whether the data size of the part is larger than a preset value;
if not,
serializing the part to obtain a serialization result;
if so,
judging whether the serialization result of the part exists in a memory cache region; if so, acquiring the serialization result from the memory cache region;
and if it does not exist in the memory cache region, serializing the part, acquiring the serialization result of the part, and storing the serialization result in the memory cache region.
2. The method for locating instances of distributed data according to claim 1, wherein the step of obtaining the serialization result of each part comprises:
each part is serialized, and the serialization result is obtained.
3. The method for locating instances of distributed data according to claim 1, wherein
the method for dividing the data to be processed into a plurality of parts comprises the following steps:
and segmenting the data to be processed according to fields and/or field combinations.
4. The method for locating instances of distributed data according to claim 1, wherein the preset value ranges from 128 bytes to 512 bytes.
5. The method for locating instances of distributed data according to claim 1, wherein
judging whether the size of the serialization result of the data to be processed is larger than or equal to a preset threshold value or not;
if yes, a murmur_128 hash strategy is selected;
otherwise, fnv1a hash strategy is selected.
6. The method for locating instances of distributed data according to claim 5, wherein the preset threshold is 432 bytes.
7. The method for locating instances of distributed data according to claim 1, wherein the server is a redis server and the data to be processed is a redis key, the method comprising:
the client divides the redis key into a plurality of parts according to preset conditions in the redis key, and records the sequence of fields in the redis key;
and combining the serialization results of all parts into the serialization result of the redis key according to the sequence.
8. The method for locating instances of distributed data according to claim 7, wherein the method comprises:
calculating the hash value of the obtained serialization result of the redis key;
obtaining a corresponding relation between the redis key and the redis instance according to the hash value;
and sending the redis command corresponding to the redis key to the redis instance according to the corresponding relation.
9. A client for use in a distributed computing system, the distributed computing system comprising a plurality of clients and at least one server, characterized in that the client comprises an acquisition module, a dividing module, an operation module, a combination module and a calculation module,
the acquisition module is used for acquiring data to be processed;
the dividing module is used for dividing the data to be processed into a plurality of parts;
the operation module is used for acquiring a serialization result of each part;
the combination module is used for combining the serialization result of each part into the serialization result of the data to be processed;
the calculation module is used for performing hash calculation on the serialization result of the data to be processed so as to designate a processing instance of the data to be processed;
the client further comprises a first judgment module, a second judgment module, a processing module, a reading module and a storage module:
the first judging module is used for judging whether the data size of a part is larger than a preset value; if so, the second judging module is called, and if not, the processing module is called;
the second judging module is used for judging whether the serialization result of the part exists in a memory cache region; if so, the reading module is called, and if not, the storage module is called;
the processing module is configured to serialize the portion;
the reading module is used for acquiring a serialization result in the memory cache region;
the storage module is used for serializing the part, acquiring a serialization result of the part and storing the serialization result in the memory cache region.
10. A distributed computing system comprising a plurality of clients as claimed in claim 9 and at least one server.
CN201610964886.0A 2016-10-29 2016-10-29 Distributed data instance locating method, client and distributed computing system Active CN108009019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610964886.0A CN108009019B (en) 2016-10-29 2016-10-29 Distributed data instance locating method, client and distributed computing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610964886.0A CN108009019B (en) 2016-10-29 2016-10-29 Distributed data instance locating method, client and distributed computing system

Publications (2)

Publication Number Publication Date
CN108009019A CN108009019A (en) 2018-05-08
CN108009019B (en) 2021-06-22

Family

ID=62048313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610964886.0A Active CN108009019B (en) 2016-10-29 2016-10-29 Distributed data instance locating method, client and distributed computing system

Country Status (1)

Country Link
CN (1) CN108009019B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101505472B (en) * 2008-02-05 2011-07-20 华为技术有限公司 User data server system and apparatus
US10558705B2 (en) * 2010-10-20 2020-02-11 Microsoft Technology Licensing, Llc Low RAM space, high-throughput persistent key-value store using secondary memory
CN103038755B (en) * 2011-08-04 2015-11-25 华为技术有限公司 Method, the Apparatus and system of data buffer storage in multi-node system
CN102591970B (en) * 2011-12-31 2014-07-30 北京奇虎科技有限公司 Distributed key-value query method and query engine system
CN103177120B (en) * 2013-04-12 2016-03-30 同方知网(北京)技术有限公司 A kind of XPath query pattern tree matching method based on index
CN103268318B (en) * 2013-04-16 2016-04-13 华中科技大学 A kind of distributed key value database system of strong consistency and reading/writing method thereof
CN103870393B (en) * 2013-07-09 2017-05-17 上海携程商务有限公司 cache management method and system
CN103488581B (en) * 2013-09-04 2016-01-13 用友网络科技股份有限公司 Data buffering system and data cache method
KR20150103477A (en) * 2014-03-03 2015-09-11 주식회사 티맥스 소프트 Apparatus and method for managing cache in cache distributed environment
CN105653629B (en) * 2015-12-28 2020-03-13 湖南蚁坊软件股份有限公司 Distributed data filtering method based on Hash ring

Also Published As

Publication number Publication date
CN108009019A (en) 2018-05-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant