CN106844399A - Distributed data base system and its adaptive approach - Google Patents
- Publication number
- CN106844399A CN106844399A CN201510890348.7A CN201510890348A CN106844399A CN 106844399 A CN106844399 A CN 106844399A CN 201510890348 A CN201510890348 A CN 201510890348A CN 106844399 A CN106844399 A CN 106844399A
- Authority
- CN
- China
- Prior art keywords
- data
- back end
- node
- fragmentation
- triplicate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/25—Integrating or interfacing systems involving database management systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a distributed database system and an adaptive method therefor. The system includes a control node, a client API and data nodes. The control node manages the data nodes of the system, computes the data routing of the system and broadcasts it to the client API and the data nodes. The client API provides data accessors with an interface for reading/writing data and forwards received data operation requests to the corresponding data nodes according to the locally cached data routing. The data nodes store data shards and process received data operation requests according to the locally cached data routing. The invention shortens the data access path, improving efficiency; the data nodes have no master/slave division, so the system load is better balanced; and data migration is smoother and more uniform.
Description
Technical field
The present invention relates to the database field, and in particular to a distributed database system and an adaptive method therefor.
Background art
A distributed database is usually a database cluster system composed of many data nodes having computing, storage and network communication functions. It offers high performance and high reliability and is widely used in industries such as telecommunications, banking and the Internet. An existing distributed database consists of data access proxy nodes and data storage nodes. The data storage nodes are divided into multiple data storage clusters according to data keys; each data storage cluster has one data storage master node and multiple data storage slave nodes. The master node provides read/write data services, the slave nodes provide only read services, and data written to the master node is replicated to the slave nodes. The data access proxy nodes handle data operation requests on behalf of data accessors and forward them to the appropriate data storage node of the corresponding data storage cluster. Because this kind of distributed database has many data nodes that depend on one another, it suffers from the following problems:
1. Low access efficiency
Existing distributed databases have dedicated data access proxy nodes, which lengthen the data accessor's access path and reduce the accessor's processing efficiency;
2. Unbalanced data capacity and load between nodes
Because the data storage nodes are divided into masters and slaves, when the write frequency is high data can only be written on the master node, making the master node's load heavier and easily hitting a performance bottleneck, while the slave nodes, which only serve reads and exist in multiples, are under-utilized. This leads to unbalanced data capacity and load between data nodes, with some nodes hitting performance bottlenecks and others wasting resources. When a data node fails, its data can only be taken over by a single node or a few nodes (the slave nodes), aggravating the load imbalance between nodes;
3. Data distribution is hard to adjust, and data is hard to migrate smoothly
When data nodes are added or removed, and particularly in virtualized environments where elastic scaling of data nodes is the norm, the distribution of data across the data nodes must often be adjusted. Commands must be executed manually, or the system restarted, to adjust the distribution, and the adjustment process is lengthy, posing considerable risk to the stable operation and service quality of the distributed database;
4. Complex state maintenance
Master/slave storage nodes use unidirectional master-to-slave replication; when a master node fails, a new master must be re-elected, making system state maintenance complex.
For the above problems of distributed databases, the usual industry approach is: data is divided into multiple shards by range or by the hash value of the data key, and the shards are then evenly distributed across the data nodes using a consistent hashing algorithm. However, this does not consider the even distribution of each shard's copies (backups) across nodes. Consistent-hashing-based distribution also introduces a new problem: when nodes are added or removed, sometimes very few shards are adjusted and sometimes many are; the movement of shards between nodes is unpredictable, and the number of shards migrated is uncontrollable.
Summary of the invention
Embodiments of the present invention provide a distributed database system and an adaptive method therefor, to address the problems of unbalanced load between nodes, hard-to-adjust data distribution, non-smooth data migration and complex maintenance in existing distributed database systems.
The invention discloses a distributed database system. The system includes a control node, a client API and data nodes, wherein:
the control node is configured to manage the data nodes of the system, compute the data routing of the system and broadcast it to the client API and the data nodes;
the client API is configured to provide data accessors with an interface for reading/writing data, and to forward received data operation requests to the corresponding data nodes according to the locally cached data routing;
the data nodes are configured to store data shards and to process received data operation requests according to the locally cached data routing.
Preferably, the data nodes are deployed in the system as virtual machines or as compute/storage hosts.
Preferably, the client API runs for data accessors as a dynamic library or plug-in.
Preferably, the control node monitors in real time the number and state of the data nodes in the system; when the number of data nodes changes, it performs a node scale-out/scale-in operation; when a data node's state changes, it updates the state of the corresponding data node in the data routing and broadcasts the updated data routing.
Preferably, the client API computes, from the data key in a received data operation request, the data shard corresponding to the requested data, looks up the data node holding each data shard in the locally cached data routing, and forwards the data operation request to the corresponding data node according to the locally cached data node selection rule.
Preferably, after receiving a data operation request, the data node looks up in the locally cached data routing whether the data shard in the request is stored on this data node. If the shard is not stored on this data node, it looks up the data node holding the shard in the locally cached data routing and forwards the request to the node found; if the shard is stored on this data node, it executes the request and returns a data operation response to the data accessor.
Preferably, the data node reports its own state to the control node periodically, and reports its own state to the control node in real time when its links change; the control node periodically updates the data routing.
Preferably, the data node performs data recovery operations and data replication operations; the control node divides the data nodes into domains according to a preset domain division rule.
The invention further discloses an adaptive method for a distributed database system. After the system powers on, the method performs the following steps:
the control node computes the data routing of the system and broadcasts it to the client API and all data nodes;
the client API receives a data operation request from an accessor and forwards the request to the corresponding data node according to the locally cached data routing;
the data node processes the received data operation request and returns a data operation response to the accessor.
Preferably, before computing the data routing of the system, the control node also divides the data nodes into domains according to a preset domain division rule.
Preferably, the domain division rule is: if the data nodes belong to a single host/server, the data nodes are all placed in the left domain or all in the right domain; if the data nodes belong to two or more hosts/servers, the data nodes are divided into a left domain and a right domain according to the principle of distributing their hosts/servers evenly, with data nodes belonging to the same host/server placed in the same domain.
Preferably, the control node computes, from the number of data nodes and the number of data shards in the system, the number of data shards to be placed on each data node, and generates the data routing.
Preferably, the step in which the client API forwards the request to the corresponding data node according to the locally cached data routing is specifically:
computing the corresponding data shard from the data key in the data operation request;
looking up the data node corresponding to each data shard in the locally cached data routing;
forwarding the data operation request to each data node found, according to the preset data node selection rule.
Preferably, the data node selection rule is:
when the shard found corresponds to a single data node, the data operation request is forwarded directly to that data node;
when the shard found corresponds to more than one data node, the type of the data operation request is checked: for a write operation, the number of copies of the shard on each of those data nodes and the node states are checked, and the request is sent to a data node that is in the normal state and holds fewer copies; for a read operation, the request is sent to the least loaded data node.
Preferably, the data node processes a received data operation request as follows: it looks up in the locally cached data routing whether the data shard in the request is stored on this data node; if so, it executes the request and returns a data operation response to the data accessor; otherwise, it looks up the data node holding the shard in the locally cached data routing and forwards the request to the node found.
Preferably, executing the data operation request is specifically:
when the data operation request is a write operation, performing an insert, update or delete on the locally stored copy of the data shard according to the accessor's operation mode;
when the data operation request is a read operation, reading data from the locally stored copy of the data shard.
Preferably, when the data operation request is a write operation, after the request has been processed a data replication flow is performed, specifically:
recording the changed data, or the full data, of the data shard;
looking up in the locally cached data routing the data nodes holding the remaining copies of the shard, and replicating the shard's changed data, or its full data, to those data nodes.
Preferably, during system operation the control node also monitors in real time whether data nodes are added to or removed from the system: if a data node is added, it performs a node scale-out operation; if a data node is removed, it performs a node scale-in operation.
Preferably, the node scale-out operation specifically includes the following steps:
computing the list of first-replica shards and the list of second-replica shards to be migrated onto the newly added data node;
allocating third replicas on the new data node for the shards to be migrated, and recomputing and broadcasting the data routing of the system;
waiting for the new data node to recover data;
receiving the state reported by the new data node, and recomputing and broadcasting the data routing of the system according to the preset scale-out rule;
notifying all data nodes to delete the third replicas of all local shards;
after confirming that all data nodes have finished deleting, deleting the third replicas from the local data routing, and recomputing and broadcasting the data routing of the system.
Preferably, the step of computing the list of first-replica shards and the list of second-replica shards to be migrated onto the newly added data node is specifically:
dividing the total number of data shards by the total number of data nodes, including the new data node, to compute the average number of shards each data node should store;
subtracting the computed average from the current shard count of each data node to compute the number of shards to be migrated from each old data node to the new data node;
composing the new data node's first-replica shard list from the first replicas of all shards to be moved out of the old data nodes, and the new data node's second-replica shard list from the second replicas of all shards to be moved out of the old data nodes.
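The averaging arithmetic above can be sketched as follows (an illustrative Python sketch only, not the patent's implementation; the function name and the dict layout are assumptions, and the integer division leaves any remainder shards to the subsequent routing recomputation):

```python
def scale_out_migration_counts(shard_counts, total_shards):
    """How many first-replica shards each old node should migrate to one
    newly added node, per the averaging rule above."""
    nodes_after = len(shard_counts) + 1        # node total including the new node
    average = total_shards // nodes_after      # average shards per node after scale-out
    # Each old node hands over whatever it currently holds above the new average.
    return {node: count - average for node, count in shard_counts.items()}

# 16 shards on 4 old nodes, adding a 5th node: the new average is 16 // 5 = 3,
# so each old node migrates 4 - 3 = 1 shard to the new node.
moves = scale_out_migration_counts({"n1": 4, "n2": 4, "n3": 4, "n4": 4}, 16)
```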
Preferably, the preset scale-out rule is:
notifying each old data node to switch the first replica of each local shard to be migrated to the new data node into a third replica, while notifying the new data node to switch its third replica of the corresponding shard into a first replica;
notifying each old data node to switch the second replica of each local shard to be migrated to the new data node into a third replica, while notifying the new data node to switch its third replica of the corresponding shard into a second replica.
Preferably, the node scale-in operation specifically includes the following steps:
computing the list of first-replica shards and the list of second-replica shards on each remaining node;
allocating third replicas on the remaining data nodes for the shards to be migrated, and recomputing and broadcasting the data routing of the system;
waiting for the remaining data nodes to recover data;
waiting for the remaining data nodes to replicate data;
receiving the states reported by the remaining data nodes, and recomputing and broadcasting the data routing of the system according to the preset scale-in rule;
notifying all data nodes to delete the third replicas of all local shards;
after confirming that all data nodes have finished deleting, deleting the third replicas from the local data routing, and recomputing and broadcasting the data routing of the system.
Preferably, the step of computing the list of first-replica shards and the list of second-replica shards on each remaining node is specifically:
dividing the total number of data shards by the number of remaining data nodes to compute the average number of shards each remaining data node should store;
subtracting the current shard count of each remaining data node from the average to compute the number of shards each remaining data node should take over from the node to be removed;
assigning the first and second replicas of the shards on the data node to be removed to the remaining data nodes according to the preset shard distribution principles, obtaining the first-replica shard list and the second-replica shard list of each remaining node.
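The intake arithmetic above can be sketched in the same way (again an illustrative sketch; the names are assumptions, and the remainder of the integer division is left to the routing recomputation):

```python
def scale_in_intake_counts(remaining_counts, total_shards):
    """How many shards each remaining node should take over from the
    node being removed, per the averaging rule above."""
    average = total_shards // len(remaining_counts)   # new average per remaining node
    # Each remaining node absorbs shards up to the new average.
    return {node: average - count for node, count in remaining_counts.items()}

# 16 shards, one of 4 nodes removed: the new average is 16 // 3 = 5,
# so each remaining node takes over 5 - 4 = 1 shard.
intake = scale_in_intake_counts({"n1": 4, "n2": 4, "n3": 4}, 16)
```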
Preferably, the preset scale-in rule is:
notifying the data node to be removed to switch the first replica of each shard to be migrated into a third replica, while notifying the remaining data node storing the third replica of that shard to switch it into a first replica;
notifying the data node to be removed to switch the second replica of each shard to be migrated into a third replica, while notifying the remaining data node storing the third replica of that shard to switch it into a second replica.
Preferably, the shard distribution principles are:
the number of data shards on each data node is as equal as possible; and
the first replica and second replica of each data shard are placed on data nodes of different domains; and
the second replicas of all first-replica shards on each data node are evenly distributed across all data nodes of the other domain.
Preferably, the data node recovers data as follows:
querying the local data routing to obtain the data nodes holding the second replicas of the first-replica shards on this node;
replicating the corresponding shards to the data nodes holding the second replicas;
when recovery is complete, reporting its own state to the control node.
Preferably, the added data node is a data node newly joined to the system; the removed data nodes include data nodes removed because their load is below a preset value, and data nodes whose removal is required by a user delete instruction.
Preferably, the client API determines the shard index of the requested data by taking a hash value of the data key and then taking that hash value modulo the total number of data shards.
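The hash-modulo mapping can be sketched as follows (the patent does not name a hash function; CRC32 is an arbitrary stand-in, and the function name is an assumption):

```python
import zlib

def shard_of(key: str, total_shards: int) -> int:
    """Map a data key to a shard index: hash the key, then take the hash
    modulo the total shard count, as in the client API rule above."""
    return zlib.crc32(key.encode("utf-8")) % total_shards
```

Because every client API instance applies the same deterministic function, all of them route a given key to the same shard without coordination.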
Compared with the prior art, the present invention requires no dedicated proxy access node, so the data access path is shorter and more efficient. Data is stored and managed in shards, and the data nodes have no master/slave division; the multiple replicas of a shard can replicate to one another, so the load across the nodes of the distributed database is better balanced. The data routing is computed and distributed automatically, and data migration is controllable, smoother and more uniform, requires no manual intervention, and does not interrupt access.
Brief description of the drawings
Fig. 1 is a schematic block diagram of the distributed database system of the present invention;
Fig. 2 is a flowchart of a preferred embodiment of the adaptive method of the distributed database system of the present invention;
Fig. 3 is a flowchart of a preferred embodiment of the data node discovery process in the adaptive method of the distributed database system of the present invention;
Fig. 4 is a flowchart of a preferred embodiment of the data node state management process in the adaptive method of the distributed database system of the present invention;
Fig. 5 is a flowchart of a preferred embodiment of data replication in the adaptive method of the distributed database system of the present invention;
Fig. 6 is a flowchart of a preferred embodiment of the node scale-out operation in the adaptive method of the distributed database system of the present invention;
Fig. 7 is a flowchart of a preferred embodiment of the node scale-in operation in the adaptive method of the distributed database system of the present invention;
Fig. 8 is a flowchart of a preferred embodiment of the data recovery process of a data node in the adaptive method of the distributed database system of the present invention.
To make the technical solution of the present invention clearer, it is described in further detail below with reference to the accompanying drawings.
Specific embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
As shown in Fig. 1, a schematic block diagram of the distributed database system of the present invention, this embodiment includes a control node 10, a client API 20 and data nodes 30; this embodiment includes 4 data nodes 30. Here:
The control node 10 manages the data nodes 30 of the system, computes the data routing of the system and broadcasts it to the client API 20 and the data nodes 30; specifically, it:
periodically updates and broadcasts the data routing;
monitors in real time the number and state of the data nodes 30 in the system, and performs a node scale-out/scale-in operation when the number of data nodes 30 in the system changes;
when the state of a data node 30 changes, updates the state of the corresponding data node 30 in the data routing and broadcasts the updated data routing; and
divides the data nodes 30 into domains according to the preset domain division rule.
The domain division rule is:
if the data nodes belong to a single host/server, the data nodes are all placed in the left domain or all in the right domain; if the data nodes belong to two or more hosts/servers, the data nodes are divided into a left domain and a right domain according to the principle of distributing their hosts/servers evenly (that is, making the number of hosts/servers in the left and right domains as equal as possible), so that data nodes belonging to the same host/server are placed in the same domain.
For example, as shown in Fig. 1, number the 4 data nodes 1-4 from left to right. If the 4 data nodes belong to 1 host/server, all 4 data nodes are placed in the left domain or in the right domain. If the 4 data nodes belong to 2 hosts/servers, suppose data nodes 1 and 2 belong to the first host/server and data nodes 3 and 4 belong to the second host/server: then data nodes 1 and 2, belonging to the first host/server, are placed in the left domain, and data nodes 3 and 4, belonging to the second host/server, are placed in the right domain, so each domain has 2 data nodes. Or suppose data nodes 1, 2 and 3 belong to the first host/server and data node 4 belongs to the second host/server: then data nodes 1, 2 and 3, belonging to the first host/server, are placed in the left domain, and data node 4, belonging to the second host/server, is placed in the right domain, so the left domain has 3 data nodes and the right domain has 1.
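The worked example above can be sketched as follows (an illustrative sketch; round-robin assignment of hosts to domains is one way to satisfy the even-distribution principle, and all names are assumptions):

```python
def divide_domains(node_hosts):
    """Split data nodes into a left and a right domain so that hosts are
    spread evenly and all nodes of one host/server share a domain.
    node_hosts maps node id -> host/server id."""
    by_host = {}
    for node, host in node_hosts.items():
        by_host.setdefault(host, []).append(node)
    if len(by_host) == 1:                    # single host: one domain holds all nodes
        return {"left": list(node_hosts), "right": []}
    domains = {"left": [], "right": []}
    for i, nodes in enumerate(by_host.values()):   # alternate hosts between domains
        domains["left" if i % 2 == 0 else "right"].extend(nodes)
    return domains

# Nodes 1-2 on hostA and 3-4 on hostB split into two domains of two nodes each.
d = divide_domains({1: "hostA", 2: "hostA", 3: "hostB", 4: "hostB"})
```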
To achieve balance of the data shards and reliability of the data, the data routing computed by the control node 10 should satisfy the following shard distribution principles:
the number of data shards on each data node is as equal as possible; and
the first replica and second replica of each data shard are placed on data nodes of different domains; and
the second replicas of all first-replica shards on each data node are evenly distributed across all data nodes of the other domain. For example, if the current data node is in the left domain and holds the first replicas of 10 data shards, then by the above principles the second replicas of those 10 shards should be evenly distributed across all data nodes of the right domain; assuming the right domain has 2 data nodes, each data node in the right domain holds 5 of the second replicas of those 10 shards.
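The even-spread principle in the example above can be sketched as follows (an illustrative round-robin placement; the names are assumptions):

```python
def place_second_replicas(first_replica_shards, other_domain_nodes):
    """Spread the second replicas of one node's first-replica shards
    evenly across all data nodes of the opposite domain (round-robin)."""
    placement = {node: [] for node in other_domain_nodes}
    for i, shard in enumerate(first_replica_shards):
        placement[other_domain_nodes[i % len(other_domain_nodes)]].append(shard)
    return placement

# 10 first replicas on a left-domain node, 2 right-domain nodes: 5 each.
p = place_second_replicas(list(range(1, 11)), ["r1", "r2"])
```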
As shown in Fig. 1, in this embodiment the distributed database system has 4 data nodes 30 storing 16 data shards in total. The first replicas of the shards are labelled 1-16 and the second replicas 1'-16'. Each data node 30 holds the first replicas of 4 shards and the second replicas of 4 shards, and the shards whose first replicas it holds are entirely different from the shards whose second replicas it holds.
The client API 20 provides data accessors with an interface for reading/writing data, and sends received data operation requests to the corresponding data nodes 30 according to the locally cached data routing; specifically, it:
computes the corresponding data shards from the data key in a received data operation request, and looks up the data node 30 holding each data shard in the locally cached data routing; the shard computation algorithm may determine the shard index of the requested data by taking a hash value of the data key and then taking that value modulo the total number of shards, or shards may be divided by the prefix or suffix range of the data key;
forwards the data operation request to the corresponding data node 30 according to the locally cached data node selection rule.
The client API 20 runs for data accessors as a dynamic library/plug-in.
The data nodes 30 are deployed in the system as virtual machines or as compute/storage hosts, and can be configured to belong to the left domain or the right domain. A data node 30:
stores data shards; data sharding means cutting the data into multiple shards by data key, with different shards holding different data; each data shard has a first replica, a second replica and a third replica, the third replica being used only temporarily while data nodes are added or removed; the data in the multiple replicas is identical, and the multiple replicas of the same shard are stored on data nodes of different domains according to the shard distribution principles;
caches the received data routing and processes received data operation requests, which include read and write operations; specifically: after receiving a data operation request, it looks up in the locally cached data routing whether the data shard in the request is stored on this data node 30; if the shard is not stored on this data node 30, it looks up the data node 30 holding the shard in the locally cached data routing and forwards the request to the data node 30 found; if the shard is stored on this data node 30, it executes the request and returns a data operation response to the data accessor;
performs a data recovery operation on restart or when the data routing changes;
when a data shard changes, for example when its content is altered by a write operation, records the changed data or the full data and performs a data replication operation, replicating the changed or full data to the other data nodes 30 holding the same data shard;
reports its own state to the control node 10 periodically, and reports its own state to the control node 10 in real time when its links change.
The topology of the distributed database system of the present invention is hidden from data accessors, decoupling the distributed database from its accessors.
As shown in Fig. 2 being distributed data base system adaptive approach preferred embodiment stream of the present invention
Cheng Tu;The present embodiment is comprised the following steps:
Step S101:System electrification, control node 10 according to default point of domain rule, to back end
30 carry out a point domain, then the data route of computing system, and are broadcast to client API 20 and all data sections
Point 30;
This step is former according to the quantity of back end 30 of system, data fragmentation quantity and default router-level topology
Then, first authentic copy list and the triplicate of the data fragmentation that distribution is needed on each back end 30 are calculated
List, generation data route.
During system operation, the control node 10 is also responsible for data node discovery and state management; the processes are shown in Fig. 3 and Fig. 4 respectively.
Step S102: after system initialization completes, the client API 20 receives a data operation request from an accessor.
Step S103: the corresponding data shards are computed from the data key in the data operation request.
This step determines the shard index of the requested data by taking a hash value of the data key and then taking that value modulo the total number of shards; shards may also be divided by the prefix or suffix range of the data key.
Step S104:The corresponding back end of each data fragmentation is searched in the data route of local cache
30, according to default back end selection rule, the data operation request is transmitted to accordingly respectively
Back end 30;
Data route is the corresponding relation of each data fragmentation and back end 30.
The data node selection rule is: when the number of data nodes 30 corresponding to the found data fragment is 1, the data operation request is forwarded directly to that data node 30;
When the number of data nodes 30 corresponding to the found data fragment is greater than 1, the type of the data operation request is judged: if it is a write operation, the copy number of the data fragment on each of the data nodes 30 and the state of each data node 30 are checked, and the data operation request is sent to the normal-state data node 30 with the smallest copy number; if it is a read operation, the data operation request is sent to the data node 30 with the lowest load.
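The selection rule above can be sketched as follows; the field names and the load metric are illustrative assumptions, not part of the patent.

```python
def select_node(nodes, op_type):
    """Pick the target data node for a request, per the rule of step S104.

    `nodes` is a list of dicts with illustrative fields:
      {"id": ..., "state": "normal" or "abnormal", "copy_no": 1..3, "load": float}
    """
    if len(nodes) == 1:
        return nodes[0]                      # only one holder: forward directly
    if op_type == "write":
        # writes go to the normal-state node holding the smallest copy number
        normal = [n for n in nodes if n["state"] == "normal"]
        return min(normal, key=lambda n: n["copy_no"])
    # reads go to the least-loaded node
    return min(nodes, key=lambda n: n["load"])
```

Routing writes to the lowest-numbered healthy copy keeps the first copy as the preferred write target, while reads spread across replicas by load.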
Step S105: the data node 30 receiving the data operation request looks up, in the locally cached data route, whether the data fragment in the data operation request is stored on this data node 30; if so, step S106 is performed; otherwise, step S107 is performed;
In this step, the data keyword in the data operation request is parsed to check whether the data fragment of the requested data belongs to this node; if so, the data fragment corresponding to the requested data is stored on this data node 30; otherwise, the data fragment corresponding to the requested data is not stored on this data node 30.
Step S106: the data operation request is performed, a data operation response is returned to the data accessor, and the processing of the current data fragment ends;
In this step, performing the data operation request specifically comprises:
When the data operation request is a write operation, an add, modify, or delete operation is performed on the locally stored copy of the data fragment according to the accessor's operation mode;
When the data operation request is a read operation, data is read from the locally stored copy of the data fragment.
In the present invention, when the data operation request is a write operation, after the data operation request has been processed, the data replication flow shown in Fig. 5 is also performed; that is, after a data node 30 modifies local data, the modified data needs to be replicated to the data nodes 30 where the other copies of the same fragment reside.
Step S107: the data node 30 where the data fragment resides is looked up in the locally cached data route, and the data operation request is forwarded, according to the preset data node selection rule, to a corresponding data node that communicates normally with this node.
That is, if the data fragment corresponding to the data operation request is on this data node 30, the request is processed locally, reading or writing the local data; if the data fragment corresponding to the data operation request is not on this data node 30, the request is forwarded to the corresponding node for processing.
Fig. 3 is a flowchart of a preferred embodiment of the data node discovery process in the adaptive method for the distributed database system of the present invention; this embodiment comprises the following steps:
Step S201: the control node 10 monitors in real time whether a data node 30 is added to or deleted from the system; if a data node 30 is found to be added, step S202 is performed; if a data node 30 is found to be deleted, step S203 is performed;
An added data node is a data node newly joining the system;
A deleted data node includes: a data node that needs to be deleted because its load is below a preset value, and a data node required to be deleted because a user deletion instruction has been received.
Step S202: a node capacity expansion operation is performed, and the current discovery processing ends;
The node capacity expansion operation is detailed in Fig. 6;
Step S203: a node capacity reduction operation is performed, and the current discovery processing ends.
The node capacity reduction operation is detailed in Fig. 7.
Fig. 4 is a flowchart of a preferred embodiment of the data node state management process in the adaptive method for the distributed database system of the present invention; this embodiment comprises the following steps:
Step S301: the control node 10 receives the state reported by a data node 30;
Step S302: the state is checked; if it is normal, the current state processing ends; if it is abnormal, step S303 is performed;
Step S303: the state of the data node 30 in the data route is updated, and the updated data route is broadcast.
Fig. 5 is a flowchart of a preferred embodiment of data replication in the adaptive method for the distributed database system of the present invention; this embodiment comprises the following steps:
Step S301: the data node 30 performing the write operation records the changed data, or the full data, of the data fragment affected by this write operation;
Step S302: the data nodes 30 where the remaining copies of the data fragment reside are looked up in the locally cached data route;
Step S303: the changed data, or the full data, of the data fragment is replicated to the data nodes 30 where the remaining copies of the data fragment reside.
Replicating the changed data or the full data to the data nodes 30 where the other copies of the fragment reside includes: after data is written to the data node 30 storing the first copy, replicating the changed data or the full data to the data nodes 30 where the second and third copies of the fragment reside; and likewise, after data is written to a data node 30 storing the second or third copy, replicating the changed data or the full data to the data nodes 30 where the first and third copies, or the first and second copies, of the fragment reside. That is, copies of the same fragment are allowed to replicate to one another. Conflicts that may arise when identical data is replicated between copies of the same fragment can be resolved by timestamps: by comparing the update timestamps of the data, it is decided whether to apply the change by merging or overwriting, or to discard the change.
During data replication, the data node replicating the data may complete the corresponding data update synchronously or asynchronously.
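The timestamp comparison described above amounts to a last-writer-wins resolution between two versions of the same record; a minimal sketch, with an illustrative record structure that is not specified by the patent:

```python
def resolve(local, incoming):
    """Resolve a replication conflict between two versions of the same record
    by comparing their update timestamps (last-writer-wins).
    Each version is a dict: {"value": ..., "ts": update_timestamp}."""
    if incoming["ts"] > local["ts"]:
        return incoming      # newer change overwrites the local copy
    return local             # older (or equal) incoming change is discarded
```

A merge of non-overlapping fields would follow the same pattern, deciding field by field which timestamp wins.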
Fig. 6 is a flowchart of a preferred embodiment of the node capacity expansion operation in the adaptive method for the distributed database system of the present invention; this embodiment comprises the following steps:
Step S401: the control node 10 calculates the first-copy data fragment list and the second-copy data fragment list to be moved onto the added data node 30; this specifically comprises the following steps:
Dividing the total number of data fragments by the total number of data nodes including the added data node 30 to calculate the average number of data fragments each data node should store, which should be less than the current number of data fragments on the old data nodes 30;
Subtracting the calculated average number of data fragments from the current number of data fragments on each old data node 30 to calculate the number of data fragments that should be moved from each old data node 30 to the added data node 30;
The first copies of all data fragments to be moved out of the old data nodes 30 form the first-copy data fragment list of the added data node 30, and the second copies of all data fragments to be moved out of the old data nodes 30 form the second-copy data fragment list of the added data node 30; the data in these lists is empty at this point;
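The average-and-difference calculation of step S401 can be illustrated as follows; the node identifiers and the use of integer averaging are assumptions for the sake of the sketch.

```python
def expansion_moves(current_counts, total_fragments):
    """Per step S401: new average = total fragments // (old nodes + 1);
    each old node hands over (current - average) fragments to the added node.
    `current_counts` maps old node id -> its current fragment count."""
    nodes_after = len(current_counts) + 1       # include the added node
    average = total_fragments // nodes_after    # target fragments per node
    return {node: max(0, count - average)
            for node, count in current_counts.items()}
```

The sum of the per-node move counts is the number of fragments the added node ends up hosting.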
Step S402: third copies are allocated on the added data node 30 for the data fragments to be moved in; the data route of the system is recomputed and broadcast;
Step S403: the added data node 30 is waited on to recover data;
The data node data recovery process is shown in Fig. 8;
Step S404: the state reported by the added data node 30 is received, and the data route of the system is recomputed and broadcast according to a preset capacity expansion rule;
The preset capacity expansion rule is:
Notifying the old data nodes 30 to switch the first copies of the local data fragments to be migrated to the added data node 30 into third copies, while notifying the added data node to switch the third copies of the corresponding data fragments into first copies;
Notifying the old data nodes 30 to switch the second copies of the local data fragments to be migrated to the added data node 30 into third copies, while notifying the added data node 30 to switch the third copies of the corresponding data fragments into second copies.
Step S405: all data nodes 30 are notified to delete the third copies of all local data fragments;
Step S406: after it is confirmed that all data nodes 30 have completed deletion, the third copies in the local data route are deleted, and the data route of the system is recomputed and broadcast.
Fig. 7 is a flowchart of a preferred embodiment of the node capacity reduction operation in the adaptive method for the distributed database system of the present invention; this embodiment comprises the following steps:
Step S501: the control node 10 calculates the first-copy data fragment list and the second-copy data fragment list of each remaining data node 30; this step specifically comprises the following steps:
Dividing the total number of data fragments by the number of remaining data nodes 30 to calculate the average number of data fragments each of the remaining data nodes 30 should store, which should be greater than before the node reduction;
Subtracting the current number of data fragments on each remaining data node 30 from the average number of data fragments to calculate the number of data fragments that should be moved onto each remaining data node 30 from the node to be closed;
According to a preset data fragment distribution principle, assigning the first copies and second copies of the data fragments on the data node 30 to be deleted to the remaining data nodes 30, obtaining the first-copy data fragment list and the second-copy data fragment list on each remaining node;
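The calculation of step S501 mirrors the expansion case with the difference reversed; a sketch under the same assumptions (illustrative node identifiers, integer averaging):

```python
def reduction_moves(remaining_counts, total_fragments):
    """Per step S501: new average = total fragments // remaining nodes;
    each remaining node takes over (average - current) fragments from the
    node being closed. `remaining_counts` maps node id -> current count."""
    average = total_fragments // len(remaining_counts)
    return {node: max(0, average - count)
            for node, count in remaining_counts.items()}
```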
Step S502: third copies are allocated on the remaining data nodes 30 for the data fragments to be moved in; the data route of the system is recomputed and broadcast;
Step S503: the remaining data nodes 30 are waited on to recover data;
The data node 30 data recovery process is shown in Fig. 8;
Step S504: the remaining data nodes 30 are waited on to replicate data;
The data node 30 data replication process is shown in Fig. 5;
Step S505: the states reported by the remaining data nodes 30 are received, and the data route of the system is recomputed and broadcast according to a preset capacity reduction rule;
The preset capacity reduction rule is:
Notifying the data node 30 to be deleted to switch the first copies of the data fragments to be migrated into third copies, while notifying the remaining data nodes 30 storing the third copies of those data fragments to switch the third copies into first copies;
Notifying the data node 30 to be deleted to switch the second copies of the data fragments to be migrated into third copies, while notifying the remaining data nodes 30 storing the third copies of those data fragments to switch the third copies into second copies.
Step S506: all data nodes 30 are notified to delete the third copies of all local data fragments;
Step S507: after it is confirmed that all data nodes 30 have completed deletion, the third copies in the local data route are deleted, and the data route of the system is recomputed and broadcast.
Fig. 8 is a flowchart of a preferred embodiment of the data node data recovery process in the adaptive method for the distributed database system of the present invention; this embodiment comprises the following steps:
Step S601: the local data route is queried to obtain the data nodes 30 where the third copies of the first-copy data fragments on this node reside;
Step S602: the corresponding data fragments are replicated to the data nodes 30 where the third copies reside; a data node 30 receiving a data fragment stores it in the corresponding third copy;
Step S603: after recovery of all first-copy data fragments is complete, the node reports its state to the control node 10.
The foregoing is merely a preferred embodiment of the present invention and does not thereby limit the scope of the claims of the invention; any equivalent structural or flow transformation made using the contents of the description and drawings of the invention, whether applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.
Claims (28)
1. A distributed database system, characterized in that the system comprises a control node, a client API, and data nodes, wherein:
the control node is configured to manage the data nodes of the system, compute the data route of the system, and broadcast it to the client API and the data nodes;
the client API is configured to provide a data read/write interface for data accessors, and to forward received data operation requests to the corresponding data nodes according to the locally cached data route;
the data nodes are configured to store data fragments and to process received data operation requests according to the locally cached data route.
2. The distributed database system as claimed in claim 1, characterized in that the data nodes are deployed in the system as virtual machines or as compute-storage hosts.
3. The distributed database system as claimed in claim 1, characterized in that the client API serves data accessors in the form of a dynamic library or a plug-in.
4. The distributed database system as claimed in any one of claims 1-3, characterized in that the control node is configured to monitor in real time the quantity and state changes of the data nodes in the system, to perform a node capacity expansion/reduction operation when the number of data nodes changes, and, when the state of a data node changes, to update the state of the corresponding data node in the data route and broadcast the updated data route.
5. The distributed database system as claimed in any one of claims 1-3, characterized in that the client API is configured to calculate the data fragment corresponding to the requested data according to the data keyword in a received data operation request, to look up the data node where each data fragment resides in the locally cached data route, and to forward the data operation request to the corresponding data node according to the locally cached data node selection rule.
6. The distributed database system as claimed in claim 4, characterized in that the data node is configured, after receiving a data operation request, to look up in the locally cached data route whether the data fragment in the data operation request is stored on this data node; when the data fragment is not stored on this data node, to look up in the locally cached data route the data node where the data fragment resides and forward the data operation request to the found data node; and when the data fragment is stored on this data node, to perform the data operation request and return a data operation response to the data accessor.
7. The distributed database system as claimed in claim 1, characterized in that the data node is configured to report its state to the control node periodically, and to report its state to the control node in real time when a link changes; and the control node is configured to update the data route periodically.
8. The distributed database system as claimed in claim 1, characterized in that the data node is configured to perform data recovery operations and data replication operations; and the control node is configured to divide the data nodes into domains according to a preset domain-division rule.
9. An adaptive method for a distributed database system, characterized in that, after the system is powered up, the method performs the following steps:
the control node computes the data route of the system and broadcasts it to the client API and all data nodes;
the client API receives a data operation request from an accessor and forwards the request to the corresponding data node according to the locally cached data route;
the data node processes the received data operation request and returns a data operation response to the accessor.
10. The adaptive method for a distributed database system as claimed in claim 9, characterized in that the control node further performs the following step before computing the data route of the system:
dividing the data nodes into domains according to a preset domain-division rule.
11. The adaptive method for a distributed database system as claimed in claim 10, characterized in that the domain-division rule is: if the number of hosts/servers to which the data nodes belong is 1, the data nodes are divided into a left domain or a right domain; if the number of hosts/servers to which the data nodes belong is greater than or equal to 2, the data nodes are divided into a left domain and a right domain according to the principle of distributing the hosts/servers to which the data nodes belong evenly, such that data nodes belonging to the same host/server are located in the same domain.
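The domain-division rule of claim 11 can be sketched as follows. The domain labels and the host-alternation strategy are illustrative assumptions; the claim only requires that hosts be spread evenly and that co-hosted nodes share a domain.

```python
def divide_domains(node_hosts):
    """Assign data nodes to a left or right domain: nodes of the same host
    stay together, and hosts are distributed evenly between the two domains.
    `node_hosts` maps node id -> host id (illustrative names)."""
    hosts = sorted(set(node_hosts.values()))
    if len(hosts) == 1:
        # single host: all its nodes fall into one domain (here: left)
        return {node: "left" for node in node_hosts}
    # alternate hosts between the two domains so each domain gets about half
    host_domain = {h: ("left" if i % 2 == 0 else "right")
                   for i, h in enumerate(hosts)}
    return {node: host_domain[host] for node, host in node_hosts.items()}
```

Keeping the two copies of a fragment in different domains (claim 25) then guarantees they never share a host.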
12. The adaptive method for a distributed database system as claimed in claim 9 or 10, characterized in that the control node calculates, according to the number of data nodes and the number of data fragments of the system, the number of data fragments to be distributed on each data node, generating the data route.
13. The adaptive method for a distributed database system as claimed in claim 9 or 10, characterized in that the step of the client API forwarding the request to the corresponding data node according to the locally cached data route specifically comprises:
calculating the corresponding data fragment from the data keyword in the data operation request;
looking up the data node corresponding to each data fragment in the locally cached data route;
forwarding the data operation request to the found data nodes according to a preset data node selection rule.
14. The adaptive method for a distributed database system as claimed in claim 13, characterized in that the data node selection rule is:
when the number of data nodes corresponding to the found data fragment is 1, forwarding the data operation request directly to that data node;
when the number of data nodes corresponding to the found data fragment is greater than 1, judging the type of the data operation request: if it is a write operation, checking the copy number of the data fragment on each of the data nodes and the state of each data node, and sending the data operation request to the normal-state data node with the smallest copy number; if it is a read operation, sending the data operation request to the data node with the lowest load.
15. The adaptive method for a distributed database system as claimed in claim 9 or 10, characterized in that the data node processes the received data operation request by the following method:
looking up in the locally cached data route whether the data fragment in the data operation request is stored on this data node; if so, performing the data operation request and returning a data operation response to the data accessor; otherwise, looking up in the locally cached data route the data node where the data fragment resides and forwarding the data operation request to the found data node.
16. The adaptive method for a distributed database system as claimed in claim 15, characterized in that performing the data operation request specifically comprises:
when the data operation request is a write operation, performing an add, modify, or delete operation on the locally stored copy of the data fragment according to the accessor's operation mode;
when the data operation request is a read operation, reading data from the locally stored copy of the data fragment.
17. The adaptive method for a distributed database system as claimed in claim 16, characterized in that, when the data operation request is a write operation, a data replication flow is performed after the data operation request has been processed, specifically:
recording the changed data, or the full data, of the data fragment;
looking up in the locally cached data route the data nodes where the remaining copies of the data fragment reside, and replicating the changed data, or the full data, of the data fragment to the data nodes where the remaining copies of the data fragment reside.
18. The adaptive method for a distributed database system as claimed in claim 9 or 10, characterized in that the control node further performs the following steps during system operation:
monitoring in real time whether a data node is added to or deleted from the system; if a data node is added, performing a node capacity expansion operation; if a data node is deleted, performing a node capacity reduction operation.
19. The adaptive method for a distributed database system as claimed in claim 18, characterized in that the node capacity expansion operation specifically comprises the following steps:
calculating the first-copy data fragment list and the second-copy data fragment list to be moved onto the added data node;
allocating third copies on the added data node for the data fragments to be moved in, and recomputing and broadcasting the data route of the system;
waiting for the added data node to recover data;
receiving the state reported by the added data node, and recomputing and broadcasting the data route of the system according to a preset capacity expansion rule;
notifying all data nodes to delete the third copies of all local data fragments;
after confirming that all data nodes have completed deletion, deleting the third copies in the local data route, and recomputing and broadcasting the data route of the system.
20. The adaptive method for a distributed database system as claimed in claim 19, characterized in that the step of calculating the first-copy data fragment list and the second-copy data fragment list to be moved onto the added data node specifically comprises:
dividing the total number of data fragments by the total number of data nodes including the added data node to calculate the average number of data fragments each data node should store;
subtracting the calculated average number of data fragments from the current number of data fragments on each data node to calculate the number of data fragments that should be moved from each old data node to the added data node;
the first copies of all data fragments to be moved out of the old data nodes forming the first-copy data fragment list of the added data node, and the second copies of all data fragments to be moved out of the old data nodes forming the second-copy data fragment list of the added data node.
21. The adaptive method for a distributed database system as claimed in claim 19, characterized in that the preset capacity expansion rule is:
notifying the old data nodes to switch the first copies of the local data fragments to be migrated to the added data node into third copies, while notifying the added data node to switch the third copies of the corresponding data fragments into first copies;
notifying the old data nodes to switch the second copies of the local data fragments to be migrated to the added data node into third copies, while notifying the added data node to switch the third copies of the corresponding data fragments into second copies.
22. The adaptive method for a distributed database system as claimed in claim 18, characterized in that the node capacity reduction operation specifically comprises the following steps:
calculating the first-copy data fragment list and the second-copy data fragment list on each remaining node;
allocating third copies on the remaining data nodes for the data fragments to be moved in, and recomputing and broadcasting the data route of the system;
waiting for the remaining data nodes to recover data;
waiting for the remaining data nodes to replicate data;
receiving the states reported by the remaining data nodes, and recomputing and broadcasting the data route of the system according to a preset capacity reduction rule;
notifying all data nodes to delete the third copies of all local data fragments;
after confirming that all data nodes have completed deletion, deleting the third copies in the local data route, and recomputing and broadcasting the data route of the system.
23. The adaptive method for a distributed database system as claimed in claim 22, characterized in that the step of calculating the first-copy data fragment list and the second-copy data fragment list on each remaining node specifically comprises:
dividing the total number of data fragments by the number of remaining data nodes to calculate the average number of data fragments each of the remaining data nodes should store;
subtracting the current number of data fragments on each remaining data node from the average number of data fragments to calculate the number of data fragments that should be moved onto each remaining data node from the node to be closed;
according to a preset data fragment distribution principle, assigning the first copies and second copies of the data fragments on the data node to be deleted to the remaining data nodes, obtaining the first-copy data fragment list and the second-copy data fragment list on each remaining node.
24. The adaptive method for a distributed database system as claimed in claim 22, characterized in that the preset capacity reduction rule is:
notifying the data node to be deleted to switch the first copies of the data fragments to be migrated into third copies, while notifying the remaining data nodes storing the third copies of those data fragments to switch the third copies into first copies;
notifying the data node to be deleted to switch the second copies of the data fragments to be migrated into third copies, while notifying the remaining data nodes storing the third copies of those data fragments to switch the third copies into second copies.
25. The adaptive method for a distributed database system as claimed in claim 23, characterized in that the data fragment distribution principle is:
the number of data fragments on each data node is as equal as possible; and
the first copy and the second copy of each data fragment are distributed on data nodes in different domains; and
the second copies of the first-copy data fragments on each data node are distributed evenly across all data nodes of the other domain.
26. The adaptive method for a distributed database system as claimed in claim 19 or 22, characterized in that the data node recovers data as follows:
querying the local data route to obtain the data nodes where the third copies of the first-copy data fragments on this node reside;
replicating the corresponding data fragments to the data nodes where the third copies reside;
upon completion of recovery, reporting its state to the control node.
27. The adaptive method for a distributed database system as claimed in claim 18, characterized in that:
the added data node is a data node newly joining the system;
the deleted data node includes: a data node that needs to be deleted because its load is below a preset value, and a data node required to be deleted because a user deletion instruction has been received.
28. The adaptive method for a distributed database system as claimed in claim 13, characterized in that the client API determines the data fragment of the requested data by taking a hash value of the data keyword and then taking that hash value modulo the total number of data fragments.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510890348.7A CN106844399B (en) | 2015-12-07 | 2015-12-07 | Distributed database system and self-adaptive method thereof |
PCT/CN2016/103964 WO2017097059A1 (en) | 2015-12-07 | 2016-10-31 | Distributed database system and self-adaptation method therefor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510890348.7A CN106844399B (en) | 2015-12-07 | 2015-12-07 | Distributed database system and self-adaptive method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106844399A true CN106844399A (en) | 2017-06-13 |
CN106844399B CN106844399B (en) | 2022-08-09 |
Family
ID=59012671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510890348.7A Active CN106844399B (en) | 2015-12-07 | 2015-12-07 | Distributed database system and self-adaptive method thereof |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106844399B (en) |
WO (1) | WO2017097059A1 (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107273187A (en) * | 2017-06-29 | 2017-10-20 | 深信服科技股份有限公司 | Reading position acquisition methods and device, computer installation, readable storage medium storing program for executing |
CN108073696A (en) * | 2017-12-11 | 2018-05-25 | 厦门亿力吉奥信息科技有限公司 | GIS application processes based on distributed memory database |
CN108319656A (en) * | 2017-12-29 | 2018-07-24 | 中兴通讯股份有限公司 | Realize the method, apparatus and calculate node and system that gray scale is issued |
CN108664222A (en) * | 2018-05-11 | 2018-10-16 | 北京奇虎科技有限公司 | A kind of block catenary system and its application process |
CN108712488A (en) * | 2018-05-11 | 2018-10-26 | 北京奇虎科技有限公司 | A kind of data processing method based on block chain, device, block catenary system |
CN108737534A (en) * | 2018-05-11 | 2018-11-02 | 北京奇虎科技有限公司 | A kind of data transmission method, device, block catenary system based on block chain |
CN108845892A (en) * | 2018-04-19 | 2018-11-20 | 北京百度网讯科技有限公司 | Data processing method, device, equipment and the computer storage medium of distributed data base |
CN108881415A (en) * | 2018-05-31 | 2018-11-23 | 广州亿程交通信息集团有限公司 | Distributed big data analysis system in real time |
CN109189561A (en) * | 2018-08-08 | 2019-01-11 | 广东亿迅科技有限公司 | A kind of transacter and its method based on MPP framework |
CN109933568A (en) * | 2019-03-13 | 2019-06-25 | 安徽海螺集团有限责任公司 | A kind of industry big data platform system and its querying method |
CN110175069A (en) * | 2019-05-20 | 2019-08-27 | 广州南洋理工职业学院 | Distributing real time system system and method based on broadcast channel |
CN111090687A (en) * | 2019-12-24 | 2020-05-01 | 腾讯科技(深圳)有限公司 | Data processing method, device and system and computer readable storage medium |
CN111291124A (en) * | 2020-02-12 | 2020-06-16 | 杭州涂鸦信息技术有限公司 | Data storage method, system and equipment thereof |
CN111338806A (en) * | 2020-05-20 | 2020-06-26 | 腾讯科技(深圳)有限公司 | Service control method and device |
CN111400112A (en) * | 2020-03-18 | 2020-07-10 | 深圳市腾讯计算机***有限公司 | Writing method and device of storage system of distributed cluster and readable storage medium |
CN111538772A (en) * | 2020-04-14 | 2020-08-14 | 北京宝兰德软件股份有限公司 | Data exchange processing method and device, electronic equipment and storage medium |
CN112084267A (en) * | 2020-07-29 | 2020-12-15 | 北京思特奇信息技术股份有限公司 | Method for solving global broadcast of distributed database |
WO2021147926A1 (en) * | 2020-01-20 | 2021-07-29 | Huawei Technologies Co., Ltd. | Methods and systems for hybrid edge replication |
CN113535656A (en) * | 2021-06-25 | 2021-10-22 | 中国人民大学 | Data access method, device, equipment and storage medium |
CN114237520A (en) * | 2022-02-28 | 2022-03-25 | 广东睿江云计算股份有限公司 | Ceph cluster data balancing method and system |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106844399B (en) * | 2015-12-07 | 2022-08-09 | ZTE Corporation | Distributed database system and self-adaptive method thereof
CN107579865A (en) * | 2017-10-18 | 2018-01-12 | Beijing Qihoo Technology Co., Ltd. | Permission management method, apparatus and system for a distributed code server
CN112214466B (en) * | 2019-07-12 | 2024-05-14 | Hytera Communications Corp., Ltd. | Distributed cluster system, data writing method, electronic device and storage device
CN111835848B (en) * | 2020-07-10 | 2022-08-23 | Beijing ByteDance Network Technology Co., Ltd. | Data sharding method and apparatus, electronic device and computer-readable medium
CN113312005B (en) * | 2021-06-22 | 2022-11-01 | Qingdao University of Technology | Blockchain-based Internet of Things data expansion storage method, system and computing device
CN117667944B (en) * | 2023-12-12 | 2024-06-18 | Alipay (Hangzhou) Information Technology Co., Ltd. | Replica capacity expansion method, apparatus and system for a distributed graph database
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7761407B1 (en) * | 2006-10-10 | 2010-07-20 | Medallia, Inc. | Use of primary and secondary indexes to facilitate aggregation of records of an OLAP data cube |
CN103078927A (en) * | 2012-12-28 | 2013-05-01 | Heyi Network Technology (Beijing) Co., Ltd. | Key-value data distributed caching system and method
CN103095806A (en) * | 2012-12-20 | 2013-05-08 | China Electric Power Research Institute | Load balancing management system for a large power grid real-time database system
CN103324539A (en) * | 2013-06-24 | 2013-09-25 | Inspur Electronic Information Industry Co., Ltd. | Job scheduling management system and method
CN103475566A (en) * | 2013-07-10 | 2013-12-25 | Beijing Fafa Times Information Technology Co., Ltd. | Real-time message exchange platform and distributed cluster establishment method
CN103516809A (en) * | 2013-10-22 | 2014-01-15 | Inspur Electronic Information Industry Co., Ltd. | Highly scalable, high-performance distributed storage system architecture
CN103780482A (en) * | 2012-10-22 | 2014-05-07 | Huawei Technologies Co., Ltd. | Content acquisition method, user equipment and cache node
CN103838770A (en) * | 2012-11-26 | 2014-06-04 | China Mobile Group Beijing Co., Ltd. | Logical data partition method and system
CN104317899A (en) * | 2014-10-24 | 2015-01-28 | Xi'an Future International Information Co., Ltd. | Big data analysis and processing system and access method
CN104333512A (en) * | 2014-10-30 | 2015-02-04 | Beijing Si-Tech Information Technology Co., Ltd. | Distributed in-memory database access system and method
CN104380690A (en) * | 2012-06-15 | 2015-02-25 | Alcatel-Lucent | Architecture of a privacy protection system for recommendation services
CN105007238A (en) * | 2015-07-22 | 2015-10-28 | The 709th Research Institute of China Shipbuilding Industry Corporation | Implementation method and system for lightweight cross-platform message-oriented middleware
WO2017097059A1 (en) * | 2015-12-07 | 2017-06-15 | ZTE Corporation | Distributed database system and self-adaptation method therefor
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104283906B (en) * | 2013-07-02 | 2018-06-19 | Huawei Technologies Co., Ltd. | Distributed storage system, cluster node and partition management method thereof
CN103870602B (en) * | 2014-04-03 | 2017-05-31 | Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences | Spatial database sharding and replication method and system
CN104239417B (en) * | 2014-08-19 | 2017-06-09 | Tianjin Nanda General Data Technology Co., Ltd. | Method and device for dynamic adjustment after data sharding in a distributed database
CN104615657A (en) * | 2014-12-31 | 2015-05-13 | Tianjin Nanda General Data Technology Co., Ltd. | Expansion and shrinking method for a distributed cluster whose nodes support multiple data shards
- 2015
  - 2015-12-07: CN application CN201510890348.7A granted as CN106844399B (status: Active)
- 2016
  - 2016-10-31: PCT application PCT/CN2016/103964 published as WO2017097059A1 (status: Application Filing)
Non-Patent Citations (2)
Title |
---|
K. Yamamoto et al.: "Analysis of distributed route selection scheme in wireless ad hoc networks", 2004 IEEE 15th International Symposium on Personal, Indoor and Mobile Radio Communications (IEEE Cat. No. 04TH8754) *
Chen Senli et al.: "Research on distributed cache systems in power metering and acquisition systems", Information Technology *
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107273187A (en) * | 2017-06-29 | 2017-10-20 | Sangfor Technologies Inc. | Read position acquisition method and device, computer apparatus, and readable storage medium
CN108073696A (en) * | 2017-12-11 | 2018-05-25 | Xiamen Yilijiao Information Technology Co., Ltd. | GIS application method based on a distributed in-memory database
CN108073696B (en) * | 2017-12-11 | 2020-10-27 | Xiamen Yilijiao Information Technology Co., Ltd. | GIS application method based on a distributed in-memory database
CN108319656A (en) * | 2017-12-29 | 2018-07-24 | ZTE Corporation | Method, apparatus, computing node and system for implementing grayscale release
CN108845892A (en) * | 2018-04-19 | 2018-11-20 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Data processing method, apparatus, device and computer storage medium for a distributed database
CN108664222B (en) * | 2018-05-11 | 2020-05-15 | Beijing Qihoo Technology Co., Ltd. | Blockchain system and application method thereof
CN108712488B (en) * | 2018-05-11 | 2021-09-10 | Beijing Qihoo Technology Co., Ltd. | Blockchain-based data processing method and device, and blockchain system
CN108664222A (en) * | 2018-05-11 | 2018-10-16 | Beijing Qihoo Technology Co., Ltd. | Blockchain system and application method thereof
CN108737534B (en) * | 2018-05-11 | 2021-08-24 | Beijing Qihoo Technology Co., Ltd. | Blockchain-based data transmission method and device, and blockchain system
CN108737534A (en) * | 2018-05-11 | 2018-11-02 | Beijing Qihoo Technology Co., Ltd. | Blockchain-based data transmission method and device, and blockchain system
CN108712488A (en) * | 2018-05-11 | 2018-10-26 | Beijing Qihoo Technology Co., Ltd. | Blockchain-based data processing method and device, and blockchain system
CN108881415A (en) * | 2018-05-31 | 2018-11-23 | Guangzhou Yicheng Transportation Information Group Co., Ltd. | Distributed real-time big data analysis system
CN108881415B (en) * | 2018-05-31 | 2020-11-17 | Guangzhou Yicheng Transportation Information Group Co., Ltd. | Distributed real-time big data analysis system
CN109189561A (en) * | 2018-08-08 | 2019-01-11 | Guangdong Eshore Technology Co., Ltd. | Transaction processing system and method based on an MPP architecture
CN109933568A (en) * | 2019-03-13 | 2019-06-25 | Anhui Conch Group Co., Ltd. | Industrial big data platform system and query method thereof
CN110175069A (en) * | 2019-05-20 | 2019-08-27 | Guangzhou Nanyang Polytechnic | Distributed transaction processing system and method based on broadcast channel
CN110175069B (en) * | 2019-05-20 | 2023-11-14 | Guangzhou Nanyang Polytechnic | Distributed transaction processing system and method based on broadcast channel
CN111090687A (en) * | 2019-12-24 | 2020-05-01 | Tencent Technology (Shenzhen) Co., Ltd. | Data processing method, device and system, and computer-readable storage medium
CN111090687B (en) * | 2019-12-24 | 2023-03-10 | Tencent Technology (Shenzhen) Co., Ltd. | Data processing method, device and system, and computer-readable storage medium
WO2021147926A1 (en) * | 2020-01-20 | 2021-07-29 | Huawei Technologies Co., Ltd. | Methods and systems for hybrid edge replication |
CN111291124A (en) * | 2020-02-12 | 2020-06-16 | Hangzhou Tuya Information Technology Co., Ltd. | Data storage method, system and device thereof
CN111400112B (en) * | 2020-03-18 | 2021-04-13 | Shenzhen Tencent Computer Systems Co., Ltd. | Write method and device for a distributed cluster storage system, and readable storage medium
CN111400112A (en) * | 2020-03-18 | 2020-07-10 | Shenzhen Tencent Computer Systems Co., Ltd. | Write method and device for a distributed cluster storage system, and readable storage medium
CN111538772A (en) * | 2020-04-14 | 2020-08-14 | Beijing Baolande Software Co., Ltd. | Data exchange processing method and device, electronic device and storage medium
CN111338806A (en) * | 2020-05-20 | 2020-06-26 | Tencent Technology (Shenzhen) Co., Ltd. | Service control method and device
CN112084267A (en) * | 2020-07-29 | 2020-12-15 | Beijing Si-Tech Information Technology Co., Ltd. | Method for solving global broadcasting in a distributed database
CN112084267B (en) * | 2020-07-29 | 2024-06-07 | Beijing Si-Tech Information Technology Co., Ltd. | Method for solving global broadcasting in a distributed database
CN113535656A (en) * | 2021-06-25 | 2021-10-22 | Renmin University of China | Data access method, device, equipment and storage medium
CN113535656B (en) * | 2021-06-25 | 2022-08-09 | Renmin University of China | Data access method, device, equipment and storage medium
CN114237520A (en) * | 2022-02-28 | 2022-03-25 | Guangdong Ruijiang Cloud Computing Co., Ltd. | Ceph cluster data balancing method and system
CN114237520B (en) * | 2022-02-28 | 2022-07-08 | Guangdong Ruijiang Cloud Computing Co., Ltd. | Ceph cluster data balancing method and system
Also Published As
Publication number | Publication date |
---|---|
WO2017097059A1 (en) | 2017-06-15 |
CN106844399B (en) | 2022-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106844399A (en) | Distributed data base system and its adaptive approach | |
US11360854B2 (en) | Storage cluster configuration change method, storage cluster, and computer system | |
US20190213175A1 (en) | Data migration method and system | |
CN102148850B (en) | Cluster system and service processing method thereof | |
CN109597567B (en) | Data processing method and device | |
US10831612B2 (en) | Primary node-standby node data transmission method, control node, and database system | |
US20090157776A1 (en) | Repartitioning live data | |
JP7270755B2 (en) | Metadata routing in distributed systems | |
US20140108358A1 (en) | System and method for supporting transient partition consistency in a distributed data grid | |
CN108833503A (en) | A kind of Redis cluster method based on ZooKeeper | |
KR101670343B1 (en) | Method, device, and system for peer-to-peer data replication and method, device, and system for master node switching | |
CN109992206B (en) | Data distribution storage method and related device | |
US11068537B1 (en) | Partition segmenting in a distributed time-series database | |
JP2003022209A (en) | Distributed server system | |
CN113010496B (en) | Data migration method, device, equipment and storage medium | |
CN105472002A (en) | Session synchronization method based on instant copying among cluster nodes | |
CN109918021B (en) | Data processing method and device | |
CN110569302A (en) | Method and device for physical isolation of a distributed cluster based on Lucene
CN104410531B (en) | System architecture method for redundancy
CN107682411A (en) | Large-scale SDN controller cluster and network system
JP2007524325A (en) | Non-stop service system using voting and information updating and providing method in the system | |
CN107656980B (en) | Method applied to distributed database system and distributed database system | |
CN107395406B (en) | Online state data processing method, device and system of online system | |
CN107908713A (en) | Distributed dynamic cuckoo filter system and filtering method based on Redis clusters
JP7398567B2 (en) | Dynamic adaptive partitioning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |