CN110336891A - Cached data placement method, device, storage medium and apparatus - Google Patents
- Publication number
- CN110336891A CN110336891A CN201910674458.8A CN201910674458A CN110336891A CN 110336891 A CN110336891 A CN 110336891A CN 201910674458 A CN201910674458 A CN 201910674458A CN 110336891 A CN110336891 A CN 110336891A
- Authority
- CN
- China
- Prior art keywords
- physical server
- cached data
- hash
- node
- hash ring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/104—Peer-to-peer [P2P] networks
- H04L67/1061—Peer-to-peer [P2P] networks using node-based peer discovery mechanisms
- H04L67/1065—Discovery involving distributed pre-established resource-based relationships among peers, e.g. based on distributed hash tables [DHT]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a cached data placement method, device, storage medium and apparatus. The invention obtains the configuration information of each physical server in a distributed cache system and, based on that configuration information, determines the virtual nodes on a hash ring corresponding to each physical server. Upon receiving data to be cached, it determines the position of the data on the hash ring from the data's hash value, maps the data to a first target virtual node on the hash ring according to a preset mapping rule, and stores the data in the first target physical server corresponding to the first target virtual node. Because the invention allocates virtual nodes to each physical server according to its configuration information, better-performing physical servers cache more data, so that even when the performance configurations of the physical servers in the distributed cache system are uneven, the data load is more balanced.
Description
Technical field
The present invention relates to the technical field of distributed caching, and in particular to a cached data placement method, device, storage medium and apparatus.
Background technique
When a Redis cluster distributes data using client-side sharding, a consistent hashing algorithm based on virtual nodes is commonly used. The algorithm ensures that the number of virtual nodes on the hash ring is much larger than the number of physical server nodes: the more virtual nodes there are, the more finely the virtual hash ring is divided and the closer the data load of each virtual node becomes. These virtual nodes are then mapped evenly onto the physical nodes, so that even when there are few server nodes, the data load of each physical node remains similar and data skew is avoided. However, the virtual-node-based consistent hashing algorithm also has the following disadvantage:
When a Redis cluster distributes data with the virtual-node-based consistent hashing algorithm and the cluster uses N cache server nodes with different configurations, the processing performance of the cache server nodes differs. When allocating virtual nodes to the physical cache server nodes, the algorithm does not take these performance differences into account. The result is that cache server nodes of different performance all carry roughly the same amount of data: the advantage of the better-performing nodes is wasted, the weaker nodes come under pressure, and the data load of the distributed cache system becomes unbalanced.
Summary of the invention
The main purpose of the present invention is to provide a cached data placement method, device, storage medium and apparatus, intended to solve the technical problem that the data load of the physical servers in current distributed cache systems is unbalanced.
To achieve the above object, the present invention provides a cached data placement method comprising the following steps:
Obtain the configuration information of each physical server in the distributed cache system;
Determine, based on the configuration information of each physical server, the virtual nodes on the hash ring corresponding to each physical server;
Upon receiving data to be cached, perform a hash operation on the data to obtain the hash value corresponding to the data;
Based on the hash value, determine the position of the data to be cached on the hash ring;
Based on a preset mapping rule, map the data to be cached to a first target virtual node on the hash ring;
Determine the first target physical server corresponding to the first target virtual node, and store the data to be cached in the first target physical server.
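The steps above can be sketched in code. This is a hypothetical illustration, not the patent's implementation: the ring representation (a sorted list of `(position, virtual_node_id)` pairs), the `virtual_to_physical` mapping, and the use of MD5 as the hash function are all assumptions.

```python
import hashlib
from bisect import bisect_right

def hash32(key: str) -> int:
    """Map a key into the 32-bit hash ring space [0, 2**32 - 1]."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

def place(key: str, ring_positions, virtual_to_physical):
    """Return the physical server that should cache `key`.

    ring_positions: sorted list of (position, virtual_node_id) tuples.
    virtual_to_physical: dict virtual_node_id -> physical server name.
    """
    h = hash32(key)
    # Preset mapping rule (one of the rules the patent allows): walk
    # clockwise to the first virtual node, wrapping past 2**32 - 1.
    idx = bisect_right([p for p, _ in ring_positions], h) % len(ring_positions)
    vnode = ring_positions[idx][1]
    return virtual_to_physical[vnode]
```

The same key always hashes to the same position, so placement is deterministic for a fixed ring.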
Preferably, before obtaining the configuration information of each physical server in the distributed cache system, the method further includes:
distributing a plurality of virtual nodes evenly on the hash ring.
Correspondingly, determining the virtual nodes on the hash ring corresponding to each physical server based on the configuration information of each physical server specifically includes:
determining, based on the configuration information of each physical server, the number and positions of the virtual nodes on the hash ring corresponding to each physical server.
Preferably, the configuration information is storage capacity.
Correspondingly, determining, for each physical server, the number and positions of its corresponding virtual nodes on the hash ring based on its configuration information specifically includes:
selecting one physical server from the physical servers of the distributed cache system as the reference physical server, and setting a first weight for the reference physical server;
obtaining the storage capacity of each physical server in the distributed cache system, and setting a second weight for each of the other physical servers in the distributed cache system based on the storage capacity and reference weight of the reference physical server;
determining, based on the first weight and the second weights, the number and positions of the virtual nodes on the hash ring corresponding to each physical server.
Preferably, after determining the first target physical server corresponding to the first target virtual node and storing the data to be cached in the first target physical server, the method further includes:
mapping the data to be cached to a second target virtual node on the hash ring;
determining the second target physical server corresponding to the second target virtual node, and storing the data to be cached in the second target physical server.
Preferably, before mapping the data to be cached to the second target virtual node on the hash ring, the method further includes:
obtaining the historical request count of the data to be cached;
when the historical request count of the data to be cached meets a preset condition, executing the step of mapping the data to be cached to the second target virtual node on the hash ring.
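This hot-data check can be sketched as follows. The patent only says the historical request count must "meet a preset condition"; the threshold value and the counter structure below are assumptions for illustration.

```python
HOT_THRESHOLD = 1000  # assumed preset condition: request count reaches this

request_counts = {}   # key -> historical request count (assumed structure)

def record_request(key: str) -> None:
    """Accumulate the historical request count for a cached key."""
    request_counts[key] = request_counts.get(key, 0) + 1

def needs_second_replica(key: str) -> bool:
    """Only hot data is mapped to a second target virtual node."""
    return request_counts.get(key, 0) >= HOT_THRESHOLD
```

Replicating only frequently requested data keeps the extra storage cost of the second copy limited to keys where it relieves load.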
Preferably, mapping the data to be cached to the second target virtual node on the hash ring specifically includes:
taking the first target virtual node as the initial virtual node;
obtaining, in a preset direction, the current virtual node nearest to the initial virtual node on the hash ring, and obtaining the current physical server corresponding to the current virtual node;
judging whether the current physical server and the first target physical server are the same physical server;
when the current physical server and the first target physical server are different servers, taking the current virtual node as the second target virtual node;
when the current physical server and the first target physical server are the same server, taking the current virtual node as the initial virtual node and repeating the step of obtaining, in the preset direction, the current virtual node nearest to the initial virtual node on the hash ring and obtaining the current physical server corresponding to the current virtual node.
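The search described above can be sketched as follows, under assumed data structures: `ring` is a list of virtual node ids in clockwise order and `owner` maps each virtual node to its physical server (both names are illustrative, not from the patent).

```python
def find_second_target(ring, owner, first_vnode):
    """Walk clockwise from the first target virtual node until a virtual
    node owned by a *different* physical server is found."""
    first_server = owner[first_vnode]
    i = ring.index(first_vnode)
    for step in range(1, len(ring)):
        candidate = ring[(i + step) % len(ring)]
        if owner[candidate] != first_server:
            return candidate          # second target virtual node
    return None                       # every virtual node maps to one server
```

Skipping virtual nodes owned by the first target server guarantees the second copy lands on a different physical machine.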
Preferably, after determining, for each physical server, its corresponding virtual nodes on the hash ring based on the configuration information of each physical server, the method further includes:
adding each physical server and its corresponding virtual nodes to a mapping relation between physical servers and virtual nodes.
Correspondingly, determining the first target physical server corresponding to the first target virtual node and storing the data to be cached in the first target physical server specifically includes:
looking up the mapping relation to determine the first target physical server corresponding to the first target virtual node, and storing the data to be cached in the first target physical server.
In addition, to achieve the above object, the present invention also provides a cached data distribution device. The device includes a memory, a processor, and a cached data distribution program stored on the memory and runnable on the processor; when the cached data distribution program is executed by the processor, the steps of the cached data placement method described above are realized.
In addition, to achieve the above object, the present invention also provides a storage medium on which a cached data distribution program is stored; when the cached data distribution program is executed by a processor, the steps of the cached data placement method described above are realized.
In addition, to achieve the above object, the present invention also provides a cached data distribution apparatus. The cached data distribution apparatus includes:
an obtaining module, for obtaining the configuration information of each physical server in the distributed cache system;
a first determining module, for determining, based on the configuration information of each physical server, the virtual nodes on the hash ring corresponding to each physical server;
a receiving module, for performing, upon receiving data to be cached, a hash operation on the data to obtain the hash value corresponding to the data;
a second determining module, for determining, based on the hash value, the position of the data to be cached on the hash ring;
a mapping module, for mapping, based on a preset mapping rule, the data to be cached to a first target virtual node on the hash ring;
a storage module, for determining the first target physical server corresponding to the first target virtual node and storing the data to be cached in the first target physical server.
In the present invention, the configuration information of each physical server in the distributed cache system is obtained; based on that configuration information, the virtual nodes on the hash ring corresponding to each physical server are determined; upon receiving data to be cached, a hash operation is performed on the data to obtain its hash value; based on the hash value, the position of the data on the hash ring is determined; based on a preset mapping rule, the data is mapped to a first target virtual node on the hash ring; the first target physical server corresponding to the first target virtual node is determined, and the data is stored in the first target physical server. Because the present invention allocates virtual nodes to each physical server according to its configuration information, better-performing physical servers cache more data, so that even when the performance configurations of the physical servers in the distributed cache system are uneven, the data load is more balanced.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of the device of the hardware running environment involved in the embodiments of the present invention;
Fig. 2 is a flow diagram of the first embodiment of the cached data placement method of the present invention;
Fig. 3 is a flow diagram of the second embodiment of the cached data placement method of the present invention;
Fig. 4 is a functional block diagram of the first embodiment of the cached data distribution apparatus of the present invention.
The realization of the object, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in connection with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
Referring to Fig. 1, Fig. 1 is a schematic structural diagram of the device of the hardware running environment involved in the embodiments of the present invention.
As shown in Fig. 1, the device may include: a processor 1001, such as a CPU; a communication bus 1002; a user interface 1003; a network interface 1004; and a memory 1005. The communication bus 1002 realizes connection and communication between these components. The user interface 1003 may include a display screen (Display) and, optionally, a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The memory 1005 can be a high-speed RAM memory or a stable non-volatile memory, such as a disk memory; optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the structure shown in Fig. 1 does not constitute a limitation of the cached data distribution device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
As shown in Fig. 1, the memory 1005, as a kind of storage medium, may include an operating system, a network communication module, a user interface module, and a cached data distribution program.
In the device shown in Fig. 1, the network interface 1004 is mainly used for connecting a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting user equipment. The device calls, through the processor 1001, the cached data distribution program stored in the memory 1005 and executes the cached data placement method provided by the embodiments of the present invention.
The device calls, through the processor 1001, the cached data distribution program stored in the memory 1005, and executes the following operations:
obtain the configuration information of each physical server in the distributed cache system;
determine, based on the configuration information of each physical server, the virtual nodes on the hash ring corresponding to each physical server;
upon receiving data to be cached, perform a hash operation on the data to obtain the hash value corresponding to the data;
based on the hash value, determine the position of the data to be cached on the hash ring;
based on a preset mapping rule, map the data to be cached to a first target virtual node on the hash ring;
determine the first target physical server corresponding to the first target virtual node, and store the data to be cached in the first target physical server.
Further, the processor 1001 may call the cached data distribution program stored in the memory 1005 and also execute the following operations:
distribute a plurality of virtual nodes evenly on the hash ring;
determine, based on the configuration information of each physical server, the number and positions of the virtual nodes on the hash ring corresponding to each physical server.
Further, the processor 1001 may call the cached data distribution program stored in the memory 1005 and also execute the following operations:
select one physical server from the physical servers of the distributed cache system as the reference physical server, and set a first weight for the reference physical server;
obtain the storage capacity of each physical server in the distributed cache system, and set a second weight for each of the other physical servers in the distributed cache system based on the storage capacity and reference weight of the reference physical server;
determine, based on the first weight and the second weights, the number and positions of the virtual nodes on the hash ring corresponding to each physical server.
Further, the processor 1001 may call the cached data distribution program stored in the memory 1005 and also execute the following operations:
map the data to be cached to a second target virtual node on the hash ring;
determine the second target physical server corresponding to the second target virtual node, and store the data to be cached in the second target physical server.
Further, the processor 1001 may call the cached data distribution program stored in the memory 1005 and also execute the following operations:
obtain the historical request count of the data to be cached;
when the historical request count of the data to be cached meets a preset condition, execute the step of mapping the data to be cached to the second target virtual node on the hash ring.
Further, the processor 1001 may call the cached data distribution program stored in the memory 1005 and also execute the following operations:
take the first target virtual node as the initial virtual node;
obtain, in a preset direction, the current virtual node nearest to the initial virtual node on the hash ring, and obtain the current physical server corresponding to the current virtual node;
judge whether the current physical server and the first target physical server are the same physical server;
when the current physical server and the first target physical server are different servers, take the current virtual node as the second target virtual node;
when the current physical server and the first target physical server are the same server, take the current virtual node as the initial virtual node and repeat the step of obtaining, in the preset direction, the current virtual node nearest to the initial virtual node on the hash ring and obtaining the current physical server corresponding to the current virtual node.
Further, the processor 1001 may call the cached data distribution program stored in the memory 1005 and also execute the following operations:
add each physical server and its corresponding virtual nodes to the mapping relation between physical servers and virtual nodes;
look up the mapping relation to determine the first target physical server corresponding to the first target virtual node, and store the data to be cached in the first target physical server.
In the present embodiment, the configuration information of each physical server in the distributed cache system is obtained; based on that configuration information, the virtual nodes on the hash ring corresponding to each physical server are determined; upon receiving data to be cached, a hash operation is performed on the data to obtain its hash value; based on the hash value, the position of the data on the hash ring is determined; based on a preset mapping rule, the data is mapped to a first target virtual node on the hash ring; the first target physical server corresponding to the first target virtual node is determined, and the data is stored in the first target physical server. Because the present invention allocates virtual nodes to each physical server according to its configuration information, better-performing physical servers cache more data, so that even when the performance configurations of the physical servers in the distributed cache system are uneven, the data load is more balanced.
Based on the above hardware configuration, embodiments of the cached data placement method of the present invention are proposed.
Referring to Fig. 2, Fig. 2 is a flow diagram of the first embodiment of the cached data placement method of the present invention.
In the first embodiment, the cached data placement method includes the following steps:
Step S10: obtain the configuration information of each physical server in the distributed cache system.
It should be noted that the configuration information can be the storage capacity, processing performance, bandwidth, or any other information that characterizes the performance of each physical server.
Step S20: determine, based on the configuration information of each physical server, the virtual nodes on the hash ring corresponding to each physical server.
It should be noted that consistent hashing is the most common algorithm for data distribution in distributed cache systems. The scheme of the present invention builds on the existing virtual-node-based consistent hashing algorithm and improves it, thereby solving the problem of unbalanced data distribution in existing distributed cache systems. Therefore, to make the implementation of the present scheme easier to understand, the existing consistent hashing algorithm and the virtual-node-based consistent hashing algorithm are briefly introduced first.
The consistent hashing algorithm uses some hash function to generate a 32-bit unsigned integer hash value space of size [0, 2^32 - 1]. The algorithm logic joins this linear space end to end (i.e. 0 = 2^32), forming a virtual hash ring. The realization of the consistent hashing algorithm can be divided into the following steps:
1) First, take some characteristic value of each cache server node, for example its IP address or MAC address, compute its hash value with the hash function, and map the cache server into the previously constructed virtual hash ring according to that hash value.
2) Then compute the hash value of the key of each cached data object with the same hash function, and likewise map the cached data resource into the virtual hash ring according to the resulting hash value.
3) Finally, if the mapped position of a data resource coincides with the mapped position of some cache server on the virtual hash ring, the data resource is saved on that cache server. Otherwise, search clockwise from the mapped position; the first cache server node encountered is the cache server for this data resource. If no storage node is found after passing 2^32 - 1, the data resource is by default mapped to the first cache server node.
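A minimal sketch of steps 1)-3) above, assuming SHA-1 truncated to 32 bits as the hash function and server IP addresses as the characteristic values (both illustrative choices; the patent does not fix a hash function):

```python
import hashlib
from bisect import bisect_right

def h(value: str) -> int:
    """Hash into the ring space [0, 2**32 - 1]."""
    return int.from_bytes(hashlib.sha1(value.encode()).digest()[:4], "big")

class ConsistentHashRing:
    def __init__(self, server_ips):
        # Step 1: map each cache server onto the virtual hash ring
        # using its IP address as the characteristic value.
        self.nodes = sorted((h(ip), ip) for ip in server_ips)

    def lookup(self, key: str) -> str:
        # Steps 2-3: hash the key with the same function, then search
        # clockwise for the first server, wrapping past 2**32 - 1.
        pos = h(key)
        idx = bisect_right([p for p, _ in self.nodes], pos) % len(self.nodes)
        return self.nodes[idx][1]
```

The modulo in `lookup` implements the wrap-around: a key hashing past the last server position maps to the first server on the ring.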
The advantages of consistency hash algorithm, is, changes when server node number occurs in distributed cache system
When, it being capable of the smallest mapping position for influencing data resource script.But there is also following disadvantages for consistency hash algorithm.
When cache server node number is less in distributed cache system, since server buffer node passes through Hash
Later, the position on virtual annulus has randomness, and consistency hash algorithm may be since cache server node be virtual
It is unevenly distributed on Hash annulus, and causes the unbalanced problem of data distribution, influence the overall performance of distributed cache system.
Because consistency hash algorithm can have the above drawback in data distribution, in actual items application, respectively
Large enterprises are more likely to select the consistency hash algorithm based on dummy node.
To address the data skew that consistent hashing suffers when cache server nodes are unevenly distributed on the virtual hash ring, the virtual-node-based consistent hashing algorithm introduces the concept of virtual nodes on top of consistent hashing, solving the data skew that easily arises when there are few server nodes. With virtual nodes, the physical cache server nodes are no longer mapped onto the virtual hash ring directly; instead, virtual nodes are mapped onto the virtual hash ring, dividing it into several equal parts. Each virtual node maps to exactly one physical node, while one physical node can have multiple virtual nodes mapped to it.
The present invention solves the problem that the virtual-node-based consistent hashing algorithm, when allocating virtual nodes to physical servers, does not consider the performance differences between servers, so that cache servers of different performance all carry roughly the same amount of data, leaving the data load of the distributed cache system unbalanced.
In the present embodiment, before step S10, a hash ring based on virtual nodes can first be constructed by distributing a plurality of virtual nodes evenly on the hash ring. Correspondingly, step S20 specifically includes:
determining, based on the configuration information of each physical server, the number and positions of the virtual nodes on the hash ring corresponding to each physical server.
It can be understood that this scheme determines the performance level of each physical server from its configuration information, allocating a larger number of virtual nodes to strong servers and a smaller number to weak servers, so that the data volume each physical server carries is proportional to its performance and the data distribution of the entire distributed cache system is more balanced.
In the following, taking storage capacity as the configuration information, it is explained how to determine, for each physical server, the number and positions of its corresponding virtual nodes on the hash ring based on its configuration information:
Select one physical server from the physical servers of the distributed cache system as the reference physical server, and set a first weight for the reference physical server.
Obtain the storage capacity of each physical server in the distributed cache system, and set a second weight for each of the other physical servers in the distributed cache system based on the storage capacity and reference weight of the reference physical server.
For example, let the storage capacity of the selected reference physical server be S0 and set its first weight to W0; obtain the storage capacity Si of each other physical server in the distributed cache system; the second weight Wi of each other physical server can then be calculated by the following formula (1):
Wi = W0 * (Si / S0)    formula (1)
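Formula (1) can be illustrated with a small worked example; the capacities and reference weight below are invented for illustration and are not from the patent:

```python
def second_weight(s_i: float, s_0: float, w_0: float) -> float:
    """Weight of another physical server, per formula (1): Wi = W0 * (Si / S0)."""
    return w_0 * (s_i / s_0)

# Assumed storage capacities in GB; "A" is chosen as the reference server.
capacities = {"A": 64, "B": 128, "C": 32}
w0 = 4  # assumed first weight of the reference server

weights = {name: second_weight(cap, capacities["A"], w0)
           for name, cap in capacities.items()}
# Server B has twice the capacity of A, so it gets twice the weight;
# server C has half the capacity, so it gets half the weight.
```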
Based on the first weight and the second weights, determine the number and positions of the virtual nodes on the hash ring corresponding to each physical server: according to the relative weight of each physical server in the distributed cache system, the number of virtual nodes on the hash ring corresponding to each physical server is determined.
In this implementation, after the number of virtual nodes corresponding to each physical server has been determined, several equally spaced nodes can be assigned to the same physical server, or several adjacent nodes can be assigned to the same physical server. For example, suppose 16 virtual nodes are set on a hash ring and numbered sequentially, and the weights determine that 4 virtual nodes on the ring correspond to physical server A; when determining the positions of the virtual nodes corresponding to server A, the virtual nodes numbered 1, 5, 9 and 13 can be selected, or the virtual nodes numbered 6, 7, 8 and 9 can be selected.
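The two placement strategies in the example (equally spaced nodes such as 1, 5, 9, 13, or adjacent nodes such as 6, 7, 8, 9) can be sketched as follows; the helper names are assumptions:

```python
def allocate_evenly(total_nodes: int, count: int, start: int = 1):
    """Assign `count` virtual nodes with equal spacing around the ring."""
    stride = total_nodes // count
    return [start + i * stride for i in range(count)]

def allocate_adjacent(count: int, start: int):
    """Assign `count` consecutive virtual nodes starting at `start`."""
    return list(range(start, start + count))
```

With 16 virtual nodes and a quota of 4 for server A, `allocate_evenly` yields nodes 1, 5, 9 and 13, while `allocate_adjacent` starting at node 6 yields nodes 6, 7, 8 and 9, matching the example.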
Step S30: upon receiving data to be cached, perform a hash operation on the data to obtain the hash value corresponding to the data.
Step S40: based on the hash value, determine the position of the data to be cached on the hash ring.
Step S50: based on a preset mapping rule, map the data to be cached to a first target virtual node on the hash ring.
It should be noted that the preset mapping rule may search clockwise and take the virtual node found nearest to the data to be cached as the first target virtual node, or search counterclockwise and take the nearest virtual node found as the first target virtual node; of course, other mapping rules may also exist, and the present embodiment does not restrict this.
Step S60: determining a first target physical server corresponding to the first target virtual node, and storing the data to be cached in the first target physical server.
In a specific implementation, to determine the first target physical server corresponding to the first target virtual node more efficiently, after the virtual nodes corresponding to each physical server on the hash ring have been determined based on the configuration information of each physical server, each physical server and its corresponding virtual nodes may be added to a mapping relation between physical servers and virtual nodes. The first target physical server corresponding to the first target virtual node is then determined by looking up this mapping relation, and the data to be cached is stored in the first target physical server.
In this embodiment, the configuration information of each physical server in the distributed cache system is obtained; the virtual nodes corresponding to each physical server on the hash ring are determined based on that configuration information; when data to be cached is received, a hash operation is performed on it to obtain a corresponding hash value; the position of the data on the hash ring is determined from the hash value; the data is mapped, according to a preset mapping rule, onto a first target virtual node on the hash ring; the first target physical server corresponding to that virtual node is determined; and the data is stored in the first target physical server. Because the present invention allocates virtual nodes to each physical server according to its configuration information, better-performing physical servers cache more data, so that even when the performance configurations of the physical servers in the distributed cache system are uneven, the data load remains balanced.
Referring to Fig. 3, Fig. 3 is a flow diagram of a second embodiment of the cached-data allocation method of the present invention. Based on the embodiment shown in Fig. 2 above, the second embodiment of the cached-data allocation method of the present invention is proposed.
In the second embodiment, after step S60, the method further includes:
Step S70: mapping the data to be cached onto a second target virtual node on the hash ring.
In a specific implementation, the second target virtual node on the hash ring may be determined, and the data to be cached mapped onto it, through the following steps:
taking the first target virtual node as an initial virtual node;
according to a preset direction, obtaining the current virtual node nearest to the initial virtual node on the hash ring, and obtaining the current physical server corresponding to the current virtual node;
judging whether the current physical server and the first target physical server are the same physical server;
when the current physical server and the first target physical server are different servers, taking the current virtual node as the second target virtual node;
when the current physical server and the first target physical server are the same server, taking the current virtual node as the initial virtual node and repeating the step of obtaining, according to the preset direction, the current virtual node nearest to the initial virtual node on the hash ring and obtaining the current physical server corresponding to the current virtual node.
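The walk described in the steps above can be sketched as follows, taking the preset direction to be clockwise; the ring layout, names, and the dictionary of node owners are illustrative assumptions:

```python
def second_target_node(ring_positions, node_to_server, first_node):
    """Step through the ring in the preset direction, starting after the
    first target virtual node, until reaching a virtual node owned by a
    different physical server; that node is the second target virtual node."""
    first_server = node_to_server[first_node]
    start = ring_positions.index(first_node)
    for step in range(1, len(ring_positions)):
        candidate = ring_positions[(start + step) % len(ring_positions)]
        if node_to_server[candidate] != first_server:
            return candidate
    return None  # every virtual node belongs to the same physical server
```

Returning `None` covers the degenerate case of a ring backed by a single physical server, where no second replica location exists.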
It should be noted that the "first" in "first target virtual node" and the "second" in "second target virtual node" do not limit the target virtual nodes in any way; they serve only to distinguish different target virtual nodes.
Before mapping the data to be cached onto the second target virtual node, this embodiment first judges whether the physical server corresponding to the virtual node about to be mapped is different from the first target physical server, thereby guaranteeing that the data to be cached is stored in two different physical servers.
Further, before step S70 is executed, the historical request count of the data to be cached may also be obtained, and step S70 is executed only when the historical request count of the data to be cached meets a preset condition.
It will be understood that caching the data to be cached in two physical servers separately solves the problem that the cache hit rate of the distributed cache system declines when some physical server goes down or its cache is invalidated; at the same time, however, it occupies the storage space of two physical servers. Therefore, before the data to be cached is mapped onto the second target virtual node on the hash ring, the importance of the data to be cached may be assessed, and the data is stored in two physical servers only when its importance meets a preset requirement. Specifically, the importance of the data to be cached may be assessed through its historical request count.
Step S80: determining a second target physical server corresponding to the second target virtual node, and storing the data to be cached in the second target physical server.
In this embodiment, the data to be cached is mapped onto two virtual nodes and thereby cached in two physical servers, so that the cache hit rate of the system is improved when some physical server in the distributed cache system goes down or its cache is invalidated.
In addition, an embodiment of the present invention further proposes a storage medium on which a cached-data allocation program is stored. When the cached-data allocation program is executed by a processor, the following operations are realized:
obtaining the configuration information of each physical server in a distributed cache system;
determining, based on the configuration information of each physical server, the virtual nodes corresponding to each physical server on a hash ring;
when data to be cached is received, performing a hash operation on the data to be cached to obtain a hash value corresponding to the data to be cached;
based on the hash value, determining the position of the data to be cached on the hash ring;
based on a preset mapping rule, mapping the data to be cached onto a first target virtual node on the hash ring;
determining a first target physical server corresponding to the first target virtual node, and storing the data to be cached in the first target physical server.
Further, when the cached-data allocation program is executed by the processor, the following operations are also realized:
distributing multiple virtual nodes evenly on the hash ring;
determining, based on the configuration information of each physical server, the number and positions of the virtual nodes corresponding to each physical server on the hash ring.
Further, when the cached-data allocation program is executed by the processor, the following operations are also realized:
selecting a physical server from the physical servers of the distributed cache system as a reference physical server, and setting a first weight for the reference physical server;
obtaining the storage capacity of each physical server in the distributed cache system, and setting a second weight for each of the other physical servers in the distributed cache system based on the storage capacity and reference weight of the reference physical server;
based on the first weight and the second weights, determining the number and positions of the virtual nodes corresponding to each physical server on the hash ring.
Further, when the cached-data allocation program is executed by the processor, the following operations are also realized:
mapping the data to be cached onto a second target virtual node on the hash ring;
determining a second target physical server corresponding to the second target virtual node, and storing the data to be cached in the second target physical server.
Further, when the cached-data allocation program is executed by the processor, the following operations are also realized:
obtaining the historical request count of the data to be cached;
when the historical request count of the data to be cached meets a preset condition, executing the step of mapping the data to be cached onto the second target virtual node on the hash ring.
Further, when the cached-data allocation program is executed by the processor, the following operations are also realized:
taking the first target virtual node as an initial virtual node;
according to a preset direction, obtaining the current virtual node nearest to the initial virtual node on the hash ring, and obtaining the current physical server corresponding to the current virtual node;
judging whether the current physical server and the first target physical server are the same physical server;
when the current physical server and the first target physical server are different servers, taking the current virtual node as the second target virtual node;
when the current physical server and the first target physical server are the same server, taking the current virtual node as the initial virtual node and repeating the step of obtaining, according to the preset direction, the current virtual node nearest to the initial virtual node on the hash ring and obtaining the current physical server corresponding to the current virtual node.
Further, when the cached-data allocation program is executed by the processor, the following operations are also realized:
adding each physical server and the virtual nodes corresponding to it to a mapping relation between physical servers and virtual nodes;
determining, by looking up the mapping relation, the first target physical server corresponding to the first target virtual node, and storing the data to be cached in the first target physical server.
In this embodiment, the configuration information of each physical server in the distributed cache system is obtained; the virtual nodes corresponding to each physical server on the hash ring are determined based on that configuration information; when data to be cached is received, a hash operation is performed on it to obtain a corresponding hash value; the position of the data on the hash ring is determined from the hash value; the data is mapped, according to a preset mapping rule, onto a first target virtual node on the hash ring; the first target physical server corresponding to that virtual node is determined; and the data is stored in the first target physical server. Because the present invention allocates virtual nodes to each physical server according to its configuration information, better-performing physical servers cache more data, so that even when the performance configurations of the physical servers in the distributed cache system are uneven, the data load remains balanced.
Referring to Fig. 4, Fig. 4 is a functional block diagram of a first embodiment of the cached-data allocation device of the present invention. Based on the cached-data allocation method described above, the first embodiment of the cached-data allocation device of the present invention is proposed.
In this embodiment, the cached-data allocation device includes:
an obtaining module 10, configured to obtain the configuration information of each physical server in a distributed cache system.
It should be noted that the configuration information may be the storage capacity, processing performance, bandwidth, or any other information that characterizes the performance of each physical server.
a first determining module 20, configured to determine, based on the configuration information of each physical server, the virtual nodes corresponding to each physical server on a hash ring.
It should be noted that consistent hashing is the most common data-distribution algorithm in distributed cache systems. The solution of the present invention builds on the existing virtual-node-based consistent hashing algorithm and improves it, thereby solving the problem of unbalanced data distribution in existing distributed cache systems. To make the implementation of the present solution easier to understand, the existing consistent hashing algorithm and the virtual-node-based consistent hashing algorithm are briefly introduced below.
The consistent hashing algorithm first generates, through some hash function, a 32-bit unsigned-integer hash-value space of size [0, 2^32 - 1]. Algorithm logic joins the two ends of this linear space (i.e. identifying 0 with 2^32), forming a virtual hash ring. The realization of the consistent hashing algorithm can be divided into the following steps:
1) First, some characteristic value of each cache server node, for example its IP address or MAC address, is taken as input; its hash value is computed with the hash function, and according to that hash value the cache server is mapped onto the virtual hash ring constructed before.
2) Next, the hash value of the key of each cached data object is computed with the same hash function, and according to the resulting hash value the cached data resource is likewise mapped onto the virtual hash ring.
3) Finally, if a data resource maps to the same position on the virtual hash ring as some cache server, the data resource is stored on that cache server. Otherwise, starting from its mapped position, the search proceeds clockwise, and the first cache server node encountered is the cache server for that data resource. If no storage node is found before passing 2^32 - 1, the data resource is by default mapped onto the first cache server node.
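The three steps above can be sketched as follows. The choice of MD5 truncated to 32 bits is an illustrative assumption; the algorithm only requires some hash function onto [0, 2^32 - 1]:

```python
import hashlib

RING_SPACE = 2 ** 32  # hash space [0, 2^32 - 1], with the two ends joined

def ring_hash(value):
    # Steps 1 and 2: map a server characteristic value (e.g. an IP string)
    # or a data key onto a position on the virtual hash ring.
    digest = hashlib.md5(value.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % RING_SPACE

def clockwise_owner(server_positions, pos):
    # Step 3: the first server position at or after `pos` stores the data;
    # past the top of the ring, wrap around to the lowest server position.
    for p in sorted(server_positions):
        if p >= pos:
            return server_positions[p]
    return server_positions[min(server_positions)]

def owner(server_positions, key):
    """server_positions maps ring position -> server name."""
    return clockwise_owner(server_positions, ring_hash(key))
```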
The advantage of consistent hashing is that, when the number of server nodes in the distributed cache system changes, the original mapped positions of data resources are affected as little as possible. However, consistent hashing also has the following disadvantage: when the number of cache server nodes in the distributed cache system is small, the hashed positions of the nodes on the virtual ring are random, so consistent hashing may distribute data unevenly because the cache server nodes are unevenly distributed on the virtual hash ring, affecting the overall performance of the distributed cache system.
Because consistent hashing can exhibit this drawback in data distribution, in actual project applications large enterprises tend to choose the virtual-node-based consistent hashing algorithm.
To address the data skew that consistent hashing suffers when cache server nodes are unevenly distributed on the virtual hash ring, the virtual-node-based consistent hashing algorithm introduces the concept of virtual nodes on top of consistent hashing, solving the data skew that easily arises when server nodes are few. A so-called virtual node means that, on the basis of consistent hashing, physical cache server nodes are no longer mapped onto the virtual hash ring directly; instead, virtual nodes are mapped onto the virtual hash ring, and the ring is divided into several equal parts. Each virtual node maps to exactly one physical node, while one physical node may have multiple virtual nodes mapping to it.
The present invention addresses the problem that the virtual-node-based consistent hashing algorithm, when allocating virtual nodes to physical servers, does not account for differences in server performance, so that cache servers of different performance all carry roughly the same data volume, leaving the data load of the distributed cache system unbalanced.
This embodiment may first construct the virtual-node-based hash ring before step S10, distributing multiple virtual nodes evenly on the hash ring; correspondingly, step S20 specifically includes:
determining, based on the configuration information of each physical server, the number and positions of the virtual nodes corresponding to each physical server on the hash ring.
It will be understood that this solution determines the relative performance of each physical server from its configuration information: stronger physical servers are allocated a larger number of virtual nodes, and weaker physical servers a smaller number, so that the data volume loaded by each physical server is proportional to its performance and the data distribution of the whole distributed cache system is more balanced.
In the following, taking storage capacity as the configuration information, we illustrate how the number and positions of the virtual nodes corresponding to each physical server on the hash ring are determined from the configuration information of each physical server:
selecting a physical server from the physical servers of the distributed cache system as a reference physical server, and setting a first weight for the reference physical server;
obtaining the storage capacity of each physical server in the distributed cache system, and setting a second weight for each of the other physical servers in the distributed cache system based on the storage capacity and reference weight of the reference physical server.
For example, suppose the storage capacity of the selected reference physical server is S_0 and its first weight is set to W_0. After the storage capacity S_i of each of the other physical servers in the distributed cache system is obtained, the second weight W_i of each of the other physical servers can be calculated by the following formula (1):
W_i = W_0 * (S_i / S_0)    formula (1)
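Formula (1) can be written out directly; the server names and capacity values used below are illustrative:

```python
def second_weights(capacities, reference, first_weight):
    """Compute W_i = W_0 * (S_i / S_0) for every physical server other than
    the chosen reference server, per formula (1)."""
    s0 = capacities[reference]
    return {server: first_weight * (capacity / s0)
            for server, capacity in capacities.items() if server != reference}
```

For instance, with capacities {"A": 100, "B": 200, "C": 50}, reference "A", and first weight 10, server B receives weight 20.0 and server C weight 5.0, so B is allocated four times as many virtual nodes as C.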
Based on the first weight and the second weights, the number and positions of the virtual nodes corresponding to each physical server on the hash ring are determined.
Specifically, based on the first weight and the second weights, the number of virtual nodes corresponding to each physical server on the hash ring is determined according to the relative weight of each physical server in the distributed cache system.
In this implementation, after the number of virtual nodes corresponding to each physical server has been determined, several nodes at equal intervals may be assigned to the same physical server, or several adjacent nodes may be assigned to the same physical server. For example, suppose a hash ring carries 16 virtual nodes, labeled consecutively 1, 2, 3, 4, and so on up to 15. If the weights indicate that physical server A corresponds to 4 virtual nodes on the ring, then, when determining the positions of the virtual nodes corresponding to server A, the virtual nodes labeled 1, 5, 9, and 13 may be selected, or the virtual nodes labeled 6, 7, 8, and 9 may be selected instead.
a receiving module 30, configured to, when data to be cached is received, perform a hash operation on the data to be cached to obtain a hash value corresponding to the data to be cached.
a second determining module 40, configured to determine, based on the hash value, the position of the data to be cached on the hash ring.
a mapping module 50, configured to map, based on a preset mapping rule, the data to be cached onto a first target virtual node on the hash ring.
It should be noted that the preset mapping rule may search clockwise and take the virtual node found nearest to the data to be cached as the first target virtual node, or may search counterclockwise and take the nearest virtual node found as the first target virtual node. Of course, other mapping rules may also exist; this embodiment places no restriction on them.
a storage module 60, configured to determine a first target physical server corresponding to the first target virtual node and store the data to be cached in the first target physical server.
In a specific implementation, to determine the first target physical server corresponding to the first target virtual node more efficiently, after the virtual nodes corresponding to each physical server on the hash ring have been determined based on the configuration information of each physical server, each physical server and its corresponding virtual nodes may be added to a mapping relation between physical servers and virtual nodes. The first target physical server corresponding to the first target virtual node is then determined by looking up this mapping relation, and the data to be cached is stored in the first target physical server.
In this embodiment, the configuration information of each physical server in the distributed cache system is obtained; the virtual nodes corresponding to each physical server on the hash ring are determined based on that configuration information; when data to be cached is received, a hash operation is performed on it to obtain a corresponding hash value; the position of the data on the hash ring is determined from the hash value; the data is mapped, according to a preset mapping rule, onto a first target virtual node on the hash ring; the first target physical server corresponding to that virtual node is determined; and the data is stored in the first target physical server. Because the present invention allocates virtual nodes to each physical server according to its configuration information, better-performing physical servers cache more data, so that even when the performance configurations of the physical servers in the distributed cache system are uneven, the data load remains balanced.
It will be appreciated that each module in the cached-data allocation device is also used to realize the corresponding steps of the above method; details are not repeated here.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element qualified by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the merits of the embodiments.
The use of the words first, second, and third does not indicate any order; these words may be construed as names.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention in essence, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes instructions that cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of the invention. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of the present invention.
Claims (10)
1. A cached-data allocation method, characterized in that the method comprises the following steps:
obtaining configuration information of each physical server in a distributed cache system;
determining, based on the configuration information of each physical server, virtual nodes corresponding to each physical server on a hash ring;
when data to be cached is received, performing a hash operation on the data to be cached to obtain a hash value corresponding to the data to be cached;
based on the hash value, determining a position of the data to be cached on the hash ring;
based on a preset mapping rule, mapping the data to be cached onto a first target virtual node on the hash ring;
determining a first target physical server corresponding to the first target virtual node, and storing the data to be cached in the first target physical server.
2. The method according to claim 1, characterized in that before the obtaining of the configuration information of each physical server in the distributed cache system, the method further comprises:
distributing multiple virtual nodes evenly on the hash ring;
correspondingly, the determining, based on the configuration information of each physical server, of the virtual nodes corresponding to each physical server on the hash ring specifically comprises:
determining, based on the configuration information of each physical server, the number and positions of the virtual nodes corresponding to each physical server on the hash ring.
3. The method according to claim 2, characterized in that the configuration information is storage capacity;
correspondingly, the determining, based on the configuration information of each physical server, of the number and positions of the virtual nodes corresponding to each physical server on the hash ring specifically comprises:
selecting a physical server from the physical servers of the distributed cache system as a reference physical server, and setting a first weight for the reference physical server;
obtaining the storage capacity of each physical server in the distributed cache system, and setting a second weight for each of the other physical servers in the distributed cache system based on the storage capacity and reference weight of the reference physical server;
based on the first weight and the second weights, determining the number and positions of the virtual nodes corresponding to each physical server on the hash ring.
4. The method according to claim 3, characterized in that after the determining of the first target physical server corresponding to the first target virtual node and the storing of the data to be cached in the first target physical server, the method further comprises:
mapping the data to be cached onto a second target virtual node on the hash ring;
determining a second target physical server corresponding to the second target virtual node, and storing the data to be cached in the second target physical server.
5. The method according to claim 4, characterized in that before the mapping of the data to be cached onto the second target virtual node on the hash ring, the method further comprises:
obtaining a historical request count of the data to be cached;
when the historical request count of the data to be cached meets a preset condition, executing the step of mapping the data to be cached onto the second target virtual node on the hash ring.
6. The method according to claim 5, characterized in that the mapping of the data to be cached onto the second target virtual node on the hash ring specifically comprises:
taking the first target virtual node as an initial virtual node;
according to a preset direction, obtaining a current virtual node nearest to the initial virtual node on the hash ring, and obtaining a current physical server corresponding to the current virtual node;
judging whether the current physical server and the first target physical server are the same physical server;
when the current physical server and the first target physical server are different servers, taking the current virtual node as the second target virtual node;
when the current physical server and the first target physical server are the same server, taking the current virtual node as the initial virtual node and repeating the step of obtaining, according to the preset direction, the current virtual node nearest to the initial virtual node on the hash ring and obtaining the current physical server corresponding to the current virtual node.
7. The method according to claim 1, characterized in that after the determining, based on the configuration information of each physical server, of the virtual nodes corresponding to each physical server on the hash ring, the method further comprises:
adding each physical server and the virtual nodes corresponding to it to a mapping relation between physical servers and virtual nodes;
correspondingly, the determining of the first target physical server corresponding to the first target virtual node and the storing of the data to be cached in the first target physical server specifically comprise:
determining, by looking up the mapping relation, the first target physical server corresponding to the first target virtual node, and storing the data to be cached in the first target physical server.
8. A cached-data allocation apparatus, characterized in that the apparatus comprises: a memory, a processor, and a cached-data allocation program stored on the memory and executable on the processor, wherein the cached-data allocation program, when executed by the processor, realizes the steps of the cached-data allocation method according to any one of claims 1 to 7.
9. A storage medium, characterized in that a cached-data allocation program is stored on the storage medium, and the cached-data allocation program, when executed by a processor, realizes the steps of the cached-data allocation method according to any one of claims 1 to 7.
10. A cache data allocation apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire the configuration information of each physical server in a distributed cache system;
a first determination module, configured to determine, based on the configuration information of each physical server, the virtual nodes corresponding to each physical server in a hash ring;
a receiving module, configured to receive data to be cached and perform a hash operation on the data to be cached, obtaining a hash value corresponding to the data to be cached;
a second determination module, configured to determine, based on the hash value, the position of the data to be cached in the hash ring;
a mapping module, configured to map, based on a preset mapping rule, the data to be cached to a first target virtual node on the hash ring;
a storage module, configured to determine the first target physical server corresponding to the first target virtual node and store the data to be cached into the first target physical server.
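The modules of claim 10 can be sketched end to end as a consistent-hash ring. This assumes MD5 as the hash operation and "first virtual node at or after the data's position, clockwise" as the preset mapping rule; both are illustrative assumptions, since the claims do not fix either choice:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Hash ring with virtual nodes, sketching the modules of claim 10."""

    def __init__(self, servers, replicas=100):
        self.ring = []             # sorted virtual-node positions on the ring
        self.node_to_server = {}   # virtual node -> physical server
        # Acquisition + first determination modules: place each physical
        # server's virtual nodes on the hash ring.
        for server in servers:
            for i in range(replicas):
                pos = self._hash(f"{server}#{i}")
                bisect.insort(self.ring, pos)
                self.node_to_server[pos] = server

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def place(self, cache_key: str) -> str:
        # Receiving + second determination modules: hash the data to be
        # cached to find its position on the ring.
        pos = self._hash(cache_key)
        # Mapping module: the preset rule maps the data to the first
        # virtual node at or after its position, wrapping around the ring.
        idx = bisect.bisect_left(self.ring, pos) % len(self.ring)
        # Storage module: resolve that virtual node to its physical server.
        return self.node_to_server[self.ring[idx]]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
target = ring.place("user:42:profile")   # one of the three servers
```

With enough virtual nodes per server, keys spread roughly evenly across the physical servers, and removing one server remaps only the keys that landed on its virtual nodes rather than reshuffling the whole cache.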
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910674458.8A CN110336891A (en) | 2019-07-24 | 2019-07-24 | Data cached location mode, equipment, storage medium and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910674458.8A CN110336891A (en) | 2019-07-24 | 2019-07-24 | Data cached location mode, equipment, storage medium and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110336891A true CN110336891A (en) | 2019-10-15 |
Family
ID=68147551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910674458.8A Pending CN110336891A (en) | 2019-07-24 | 2019-07-24 | Data cached location mode, equipment, storage medium and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110336891A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102916811A (en) * | 2012-10-18 | 2013-02-06 | 中国科学院信息工程研究所 | Multielement entity identity certificate information storage method |
CN106020724A (en) * | 2016-05-20 | 2016-10-12 | 南京邮电大学 | Neighbor storage method based on data mapping algorithm |
CN108810041A (en) * | 2017-04-27 | 2018-11-13 | 华为技术有限公司 | A kind of data write-in of distributed cache system and expansion method, device |
CN109218438A (en) * | 2018-10-12 | 2019-01-15 | 山东科技大学 | A kind of performance optimization method of distributed cache server cluster |
CN109542330A (en) * | 2017-09-21 | 2019-03-29 | 杭州海康威视***技术有限公司 | Date storage method, data query method and device |
US20190213051A1 (en) * | 2018-01-09 | 2019-07-11 | International Business Machines Corporation | Integrating multiple distributed data processing servers with different data partitioning and routing mechanisms, resource sharing policies and lifecycles into a single process |
CN110049091A (en) * | 2019-01-10 | 2019-07-23 | 阿里巴巴集团控股有限公司 | Date storage method and device, electronic equipment, storage medium |
Non-Patent Citations (2)
Title |
---|
V. MIRROKNI, M. THORUP, M. ZADIMOGHADDAM: "Consistent Hashing with Bounded Loads", www.scienceopen.com * |
WANG Kang, LI Dongjing, CHEN Haiguang: "An Improved Consistent Hashing Algorithm in Distributed Storage Systems", Computer Technology and Development (《计算机技术与发展》) * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110830562A (en) * | 2019-10-30 | 2020-02-21 | 重庆邮电大学 | Limited load consistency Hash load balancing strategy based on virtual nodes |
CN110830562B (en) * | 2019-10-30 | 2022-06-10 | 重庆邮电大学 | Limited load consistency Hash load balancing strategy based on virtual nodes |
CN111245924A (en) * | 2020-01-08 | 2020-06-05 | 北京松果电子有限公司 | Load balancing method and device and computer storage medium |
CN111475105A (en) * | 2020-03-11 | 2020-07-31 | 平安科技(深圳)有限公司 | Monitoring data storage method, device, server and storage medium |
CN111475105B (en) * | 2020-03-11 | 2024-05-03 | 平安科技(深圳)有限公司 | Monitoring data storage method, monitoring data storage device, monitoring data server and storage medium |
CN111400739A (en) * | 2020-03-20 | 2020-07-10 | 符安文 | System data transmission distribution method |
CN111917851A (en) * | 2020-07-22 | 2020-11-10 | 电信科学技术第五研究所有限公司 | Load balancing scheduling method for realizing weighted load based on consistent hash |
CN112132683A (en) * | 2020-09-18 | 2020-12-25 | 泰康保险集团股份有限公司 | Method and device for issuing instruction, electronic equipment and storage medium |
CN112162987A (en) * | 2020-10-12 | 2021-01-01 | 北京字跳网络技术有限公司 | Data processing method, device, equipment and storage medium |
CN114428585A (en) * | 2020-10-29 | 2022-05-03 | 北京奇艺世纪科技有限公司 | Data storage method and device and electronic equipment |
CN112380288A (en) * | 2020-11-16 | 2021-02-19 | 林亮 | Decentralized distributed data processing system |
CN112486672A (en) * | 2020-11-17 | 2021-03-12 | 中国人寿保险股份有限公司 | Service memory cache calling method and device |
CN112416264A (en) * | 2020-12-11 | 2021-02-26 | 中国建设银行股份有限公司 | Data storage method and device and computer storage medium |
CN113238836A (en) * | 2021-04-15 | 2021-08-10 | 网宿科技股份有限公司 | Distributed content scheduling method, scheduling system and central server |
CN113704308A (en) * | 2021-09-02 | 2021-11-26 | 中国联合网络通信集团有限公司 | Data caching method, device, server and recharging system |
CN113704308B (en) * | 2021-09-02 | 2024-03-12 | 中国联合网络通信集团有限公司 | Data caching method, device, server and recharging system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110336891A (en) | Data cached location mode, equipment, storage medium and device | |
US11431791B2 (en) | Content delivery method, virtual server management method, cloud platform, and system | |
CN110545246B (en) | Token bucket-based current limiting method, device and computer readable medium | |
US7835304B2 (en) | Method and apparatus for assigning IP addresses | |
CN108696895A (en) | Resource acquiring method, apparatus and system | |
CN110519401A (en) | Improve method, apparatus, equipment and the storage medium of network Access Success Rate | |
US20080133830A1 (en) | Efficient utilization of cache servers in mobile communication system | |
CN105512251B (en) | A kind of page cache method and device | |
CN111478857B (en) | Interface current limiting control method and device and electronic equipment | |
CN106997351B (en) | Resource cache management method, system and device | |
KR101198437B1 (en) | Method, apparatus and computer program product for providing context triggered distribution of context models | |
CN102137139A (en) | Method and device for selecting cache replacement strategy, proxy server and system | |
WO2010045330A1 (en) | Content replacement and refresh policy implementation for a content distribution network | |
CN103178989A (en) | Method and device for calculating visit hotness | |
CN103607312A (en) | Data request processing method and system for server system | |
WO2016175768A1 (en) | Map tables for hardware tables | |
CN105791381A (en) | Access control method and apparatus | |
CN109800236A (en) | Support the distributed caching method and equipment of multinode | |
CN104038520A (en) | Multi-version distributed resource management method and multi-version distributed resource management system | |
EP3739466A1 (en) | Information management device, information management method, and information management program | |
US20180097748A1 (en) | Partitioned Topic Based Queue with Automatic Processing Scaling | |
CN106790552A (en) | A kind of content providing system based on content distributing network | |
CN107071072A (en) | A kind of distributed network gate method for dealing with objects storage high concurrent request | |
US11025712B1 (en) | Load balancer employing slow start, weighted round robin target selection | |
Cohen et al. | Self-adjusting advertisement of cache indicators with bandwidth constraints |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191015 |