CN109032794A - Cache object caching method of electronic commerce system - Google Patents

Cache object caching method of electronic commerce system

Info

Publication number
CN109032794A
CN109032794A
Authority
CN
China
Prior art keywords
memory node
data
cache
main control
commerce system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810760912.7A
Other languages
Chinese (zh)
Inventor
郑向阳
钟送来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xianyu Fansheng Information Technology Co ltd
Original Assignee
Guangzhou Xianyu Fansheng Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xianyu Fansheng Information Technology Co ltd filed Critical Guangzhou Xianyu Fansheng Information Technology Co ltd
Priority to CN201810760912.7A priority Critical patent/CN109032794A/en
Publication of CN109032794A publication Critical patent/CN109032794A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a Cache object caching method for an electronic commerce system. The electronic commerce system comprises a Cache and an upper-level application, and the Cache comprises a main control server and a plurality of storage nodes. The object caching method of the main control server comprises the following steps: maintaining parameter configuration information of the storage nodes, the parameter configuration information comprising the network topology of the cluster formed by the storage nodes, the hash value of each storage node and the load of each storage node; receiving data to be cached sent by the upper-level application, the data to be cached comprising key information and data information; calculating a hash value for the data to be cached and determining, according to the parameter configuration information, the hash value and the storage node corresponding to the data to be cached; and sending the data to be cached to the corresponding storage node. The method allows storage nodes to be added flexibly and dynamically for capacity expansion, and provides high capacity, high availability and high data security.

Description

Cache object caching method for an e-commerce system
Technical field
The present invention relates to the field of data caching methods, and in particular to a Cache object caching method for an e-commerce system.
Background art
An e-commerce system is the system that supports and realizes online transactions based on electronic commerce. In an online transaction, information exchange is carried out over digital communication channels, and an essential precondition is that both parties possess the corresponding information technology tools and can communicate over channels based on information technology. To trade over the Internet, enterprises, organizations and consumers must all be connected to the Internet; otherwise they cannot use the Internet to trade.
Because the volume of online transactions in an e-commerce system is enormous, the amount of data that needs to be cached grows day by day, so cluster caching systems with large cache capacity are commonly used. Existing cluster caching systems, such as the HP Ibrix cluster caching system, the Huawei Symantec N8500 cluster NAS system, the BWStor BlueWhale cluster caching system of the Institute of Computing Technology of the Chinese Academy of Sciences, and the LoongStore cluster caching system, offer very high performance, but their capacity expansion is inflexible and unreliable, their availability and stability are poor when handling huge data volumes, and they may stop serving the upper-level application when some physical servers suffer hardware failures, which easily causes data loss and incompleteness.
Summary of the invention
In order to overcome the above disadvantages of the prior art, the object of the present invention is to provide a Cache object caching method for an e-commerce system that offers flexible and reliable capacity expansion, high capacity, high availability and high data security.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A Cache object caching method for an e-commerce system, wherein the e-commerce system comprises a Cache and an upper-level application, and the Cache comprises a main control server and a plurality of storage nodes. The object caching method of the main control server comprises the following steps (an illustrative routing sketch follows this list):
(1) maintaining parameter configuration information of the storage nodes, the parameter configuration information comprising the network topology of the cluster formed by the storage nodes, the hash value of each storage node and the load of each storage node;
(2) receiving data to be cached sent by the upper-level application, the data to be cached comprising key information and data information;
(3) calculating a hash value for the data to be cached and determining, according to the parameter configuration information, the hash value and the storage node corresponding to the data to be cached;
(4) sending the data to be cached to the corresponding storage node.
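The disclosure contains no source code, but the routing performed by the main control server in steps (1) to (4) corresponds to a consistent-hashing lookup over the hash values held in the parameter configuration information. The Python sketch below is illustrative only; the names HashRing and route, the use of MD5 and the node labels are assumptions, not part of the patent.

    import bisect
    import hashlib

    class HashRing:
        """Minimal consistent-hash ring kept by the main control server (illustrative)."""

        def __init__(self, nodes):
            # parameter configuration information: hash value per storage node, kept sorted
            self.ring = sorted((self._hash(n), n) for n in nodes)

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)  # assumed hash function

        def route(self, key):
            # step (3): hash the key of the data to be cached and pick the owning node
            h = self._hash(key)
            hashes = [v for v, _ in self.ring]
            idx = bisect.bisect_right(hashes, h) % len(self.ring)
            return self.ring[idx][1]

    # steps (2) and (4): receive a (key, data) pair and forward it to the chosen node
    ring = HashRing(["node-1", "node-2", "node-3"])
    target = ring.route("order:10086")
    # send_to(target, key, data)  -- the network transfer is omitted in this sketch

In a consistent-hashing scheme like this sketch, each storage node owns only a segment of the hash space, so adding or removing a node later remaps only the keys in that segment, which matches the description's point that the range of affected data is kept small.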
As a further improvement of the present invention, step (4) of the object caching method of the main control server further comprises: creating a Binlog Dump thread to send the data to be cached to the corresponding storage node.
As a further improvement of the present invention, the object caching method of the storage node comprises the following steps:
(1) receiving the data to be cached sent by the main control server, the storage node comprising a plurality of hash values used to map different key information;
(2) establishing a plurality of storage instances, and copying each piece of the data to be cached into the plurality of storage instances;
(3) deploying the plurality of storage instances on different physical servers;
(4) creating an SQL thread to read the data of one of the plurality of storage instances.
As a further improvement of the present invention, step (1) of the object caching method of the storage node further comprises (a sketch of this replication channel follows the list):
(1) creating an I/O thread to connect to the Binlog Dump thread created by the main control server;
(2) reading, through the I/O thread, the data to be cached sent by the Binlog Dump thread.
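The Binlog Dump / I/O / SQL thread terminology mirrors MySQL-style replication. As a minimal sketch of the storage-node side described in the two lists above (an I/O thread pulling the data to be cached from the master's dump thread, every item copied into several storage instances, and a read path over one instance), the following Python code uses hypothetical names (StorageNode, io_thread, apply_thread); in the patented system the instances sit on different physical servers rather than inside one process.

    import queue
    import threading

    class StorageNode:
        """Illustrative storage-node side of the replication channel described above."""

        def __init__(self, instance_count=3):
            self.relay = queue.Queue()        # relay buffer fed by the I/O thread
            # storage instances; in the patent these live on different physical servers
            self.instances = [dict() for _ in range(instance_count)]

        def io_thread(self, master_stream):
            # step (1): connect to the master's Binlog Dump thread and read the data to be cached
            for key, value in master_stream:
                self.relay.put((key, value))
            self.relay.put(None)              # end-of-stream marker, only for this sketch

        def apply_thread(self):
            # step (2): copy every item into all storage instances
            while (item := self.relay.get()) is not None:
                key, value = item
                for instance in self.instances:
                    instance[key] = value

        def sql_thread_read(self, key):
            # step (4): read the data from one of the storage instances
            return self.instances[0].get(key)

    # usage: push a small stream through the two threads, then read it back
    node = StorageNode()
    stream = [("sku:1", b"red shoes"), ("sku:2", b"blue hat")]
    t_io = threading.Thread(target=node.io_thread, args=(stream,))
    t_apply = threading.Thread(target=node.apply_thread)
    t_io.start(); t_apply.start(); t_io.join(); t_apply.join()
    print(node.sql_thread_read("sku:1"))      # b'red shoes'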
As a further improvement of the present invention, the object caching method of the main control server further comprises a method of dynamically adding and removing storage nodes, which comprises the following steps:
(1) judging, according to the load of each storage node in the parameter configuration information, whether the load of each storage node is within a preset threshold range;
(2) if so, keeping the number of storage nodes unchanged;
(3) if not, adding or removing storage nodes, redistributing the hash values of the storage nodes, and updating the parameter configuration information.
As a further improvement of the present invention, step (3) of the method of dynamically adding and removing storage nodes further comprises (a sketch of the load check follows this list):
(1) judging whether there is a storage node whose load is greater than the maximum value of the threshold range;
(2) if so, adding at least one storage node;
(3) judging whether there is a storage node whose load is less than the minimum value of the threshold range;
(4) if so, removing at least one storage node.
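A minimal sketch of the load-based scaling decision described in the two lists above, assuming one numeric load figure per node; the threshold constants and the add_node, remove_node and rebalance callbacks are placeholders, not part of the disclosure.

    LOAD_MIN, LOAD_MAX = 0.2, 0.8      # assumed preset threshold range

    def adjust_cluster(node_loads, add_node, remove_node, rebalance):
        """Add or remove storage nodes based on the loads recorded in the
        parameter configuration information (illustrative only)."""
        overloaded = [n for n, load in node_loads.items() if load > LOAD_MAX]
        underloaded = [n for n, load in node_loads.items() if load < LOAD_MIN]
        if not overloaded and not underloaded:
            return                       # every load is in range: keep the node count unchanged
        if overloaded:
            add_node()                   # a node exceeds the maximum: add at least one node
        if underloaded:
            remove_node(underloaded[0])  # a node is below the minimum: remove at least one node
        rebalance()                      # redistribute hash values and update the configuration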
As a further improvement of the present invention, the object caching method of the main control server further comprises a method of monitoring the operating condition of the physical servers in real time, which comprises the following steps (a sketch follows this list):
(1) when reading data from a storage node, judging whether the selected physical server has failed;
(2) if so, reselecting another physical server;
(3) if not, reading the data from the storage instance deployed on that physical server.
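The failover read in steps (1) to (3) can be sketched as follows; the replica list and the is_alive check stand in for the real-time monitoring of the physical servers and are assumptions used only for illustration.

    def failover_read(key, replicas, is_alive):
        """Read from the first healthy physical server holding a replica (illustrative).

        replicas: list of (server, storage_instance) pairs holding the same data,
                  as produced by the replication steps above.
        """
        for server, instance in replicas:
            if is_alive(server):              # step (1): check the selected physical server
                return instance.get(key)      # step (3): read from the instance on that server
            # step (2): otherwise fall through and reselect another physical server
        raise RuntimeError("all physical servers holding this data have failed")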
Compared with the prior art, the beneficial effects of the present invention are:
In this method, the data to be cached is evenly distributed over different storage nodes, every piece of data to be cached is stored in multiple copies, and the copies are distributed on different physical servers. When the volume of data to be cached grows, storage nodes can be added flexibly and dynamically for capacity expansion; and when storage nodes are dynamically added or removed, the range of affected data is kept to a minimum, so the Cache of the e-commerce system can guarantee high capacity, high availability and high data security.
Specific embodiment
The present invention is now further described in conjunction with an embodiment:
A Cache object caching method for an e-commerce system, wherein the e-commerce system comprises a Cache and an upper-level application, and the Cache comprises a main control server and a plurality of storage nodes. The object caching method of the main control server comprises the following steps:
(1) maintaining parameter configuration information of the storage nodes, the parameter configuration information comprising the network topology of the cluster formed by the storage nodes, the hash value of each storage node and the load of each storage node;
(2) receiving data to be cached sent by the upper-level application, the data to be cached comprising key information and data information;
(3) calculating a hash value for the data to be cached and determining, according to the parameter configuration information, the hash value and the storage node corresponding to the data to be cached;
(4) sending the data to be cached to the corresponding storage node, which further comprises: creating a Binlog Dump thread to send the data to be cached to the corresponding storage node.
The object caching method of the storage node comprises the following steps:
(1) receiving the data to be cached sent by the main control server, the storage node comprising a plurality of hash values used to map different key information;
(2) establishing a plurality of storage instances, and copying each piece of the data to be cached into the plurality of storage instances;
(3) deploying the plurality of storage instances on different physical servers;
(4) creating an SQL thread to read the data of one of the plurality of storage instances.
Step (1) of the object caching method of the above storage node further comprises:
(1) creating an I/O thread to connect to the Binlog Dump thread created by the main control server;
(2) reading, through the I/O thread, the data to be cached sent by the Binlog Dump thread.
The object caching method of the main control server further comprises a method of dynamically adding and removing storage nodes, which comprises the following steps:
(1) judging, according to the load of each storage node in the parameter configuration information, whether the load of each storage node is within a preset threshold range;
(2) if so, keeping the number of storage nodes unchanged;
(3) if not, adding or removing storage nodes, redistributing the hash values of the storage nodes, and updating the parameter configuration information.
Step (3) of the above method of dynamically adding and removing storage nodes further comprises:
(1) judging whether there is a storage node whose load is greater than the maximum value of the threshold range;
(2) if so, adding at least one storage node;
(3) judging whether there is a storage node whose load is less than the minimum value of the threshold range;
(4) if so, removing at least one storage node.
The object caching method of the main control server further comprises a method of monitoring the operating condition of the physical servers in real time, which comprises the following steps:
(1) when reading data from a storage node, judging whether the selected physical server has failed;
(2) if so, reselecting another physical server;
(3) if not, reading the data from the storage instance deployed on that physical server.
The preferred storage engines for the Cache used in this method are Pigeon FlexObject and Pigeon List.
It is generally acknowledged in the industry that relational database systems themselves limit data throughput, so solving the above problems by relying directly on traditional relational database systems such as Oracle or MySQL is difficult and the corresponding cost is very high. To handle a huge data volume while supporting high availability, Pigeon FlexObject and Pigeon List are chosen as the storage engines. They use a scalable, highly available, distributed, structured key-value storage scheme, which keeps the service stable and easy to extend even as the volume of large-scale data keeps growing. They can cache large volumes of critical data, support the caching of massive transaction data, user data and user behavior data, and achieve high stability; even when some physical servers suffer hardware failures, the caching system continues to serve the upper-level application without causing data loss or incompleteness.
Pigeon FlexObject is a high-performance, persistent, distributed Key-Value data object storage engine. Its main features are: frequently used data is kept in memory, so read and write performance is extremely efficient; common data formats such as JSON and XML are supported; Redo-log data persistence is supported; and it provides distribution, master-slave synchronization and read/write splitting.
Pigeon List is a high-performance, persistent, distributed Key-List data queue storage engine. Its main features are: it supports massive lists, each list can hold objects numbering in the billions, frequently used data is kept in memory, and read and write performance is extremely efficient; a list is composed of multiple bands and a band contains multiple objects; bands are organized as a chain of blocks, so any object in a list can be quickly inserted, deleted or located through very fast binary-search lookup, insertion and deletion; any sort criterion is supported; Redo-log data persistence is supported; and it provides distribution, master-slave synchronization and read/write splitting.
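Pigeon FlexObject and Pigeon List appear to be proprietary engines, and the patent describes them only at the level above. Purely to illustrate the band-of-objects organization with binary-search location, the generic Python sketch below splits a sorted list into small bands; every name and the band size are hypothetical and unrelated to the actual engine.

    import bisect

    class BandedList:
        """Illustrative band-of-objects list: objects are kept sorted and split into
        small bands; the right band is located by binary search over band heads."""

        BAND_SIZE = 4                         # deliberately tiny for illustration

        def __init__(self):
            self.bands = [[]]                 # each band is a sorted list of objects

        def _locate_band(self, obj):
            heads = [band[0] for band in self.bands if band]
            return max(bisect.bisect_right(heads, obj) - 1, 0)

        def insert(self, obj):
            i = self._locate_band(obj)
            band = self.bands[i]
            bisect.insort(band, obj)          # binary-search insertion inside the band
            if len(band) > self.BAND_SIZE:    # split an oversized band in two
                mid = len(band) // 2
                self.bands[i:i + 1] = [band[:mid], band[mid:]]

        def contains(self, obj):
            band = self.bands[self._locate_band(obj)]
            j = bisect.bisect_left(band, obj)
            return j < len(band) and band[j] == obj

    lst = BandedList()
    for x in [50, 10, 40, 30, 20, 60]:
        lst.insert(x)
    print(lst.contains(30), lst.contains(35))  # True False

Keeping every band short means locating, inserting or deleting any object costs roughly two binary searches, which matches the fast lookup, insertion and deletion that the description attributes to the band organization.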
Functions of the invention: in this method, the data to be cached is evenly distributed over different storage nodes, every piece of data to be cached is stored in multiple copies, and the copies are distributed on different physical servers. When the volume of data to be cached grows, storage nodes can be added flexibly and dynamically for capacity expansion; and when storage nodes are dynamically added or removed, the range of affected data is kept to a minimum, which guarantees the high availability and data security of the Cache of the e-commerce system.
In conclusion after those skilled in the art read file of the present invention, according to the technique and scheme of the present invention with Technical concept is not necessarily to creative mental labour and makes other various corresponding conversion schemes, belongs to the model that the present invention is protected It encloses.

Claims (7)

1. A Cache object caching method for an e-commerce system, the e-commerce system comprising a Cache and an upper-level application, characterized in that: the Cache comprises a main control server and a plurality of storage nodes, and the object caching method of the main control server comprises the following steps:
(1) maintaining parameter configuration information of the storage nodes, the parameter configuration information comprising the network topology of the cluster formed by the storage nodes, the hash value of each storage node and the load of each storage node;
(2) receiving data to be cached sent by the upper-level application, the data to be cached comprising key information and data information;
(3) calculating a hash value for the data to be cached and determining, according to the parameter configuration information, the hash value and the storage node corresponding to the data to be cached;
(4) sending the data to be cached to the corresponding storage node.
2. The Cache object caching method for an e-commerce system according to claim 1, characterized in that step (4) of the object caching method of the main control server further comprises: creating a Binlog Dump thread to send the data to be cached to the corresponding storage node.
3. The Cache object caching method for an e-commerce system according to claim 1, characterized in that the object caching method of the storage node comprises the following steps:
(1) receiving the data to be cached sent by the main control server, the storage node comprising a plurality of hash values used to map different key information;
(2) establishing a plurality of storage instances, and copying each piece of the data to be cached into the plurality of storage instances;
(3) deploying the plurality of storage instances on different physical servers;
(4) reading the data of one of the plurality of storage instances.
4. The Cache object caching method for an e-commerce system according to claim 3, characterized in that step (1) of the object caching method of the storage node further comprises:
(1) creating an I/O thread to connect to the Binlog Dump thread created by the main control server;
(2) reading, through the I/O thread, the data to be cached sent by the Binlog Dump thread.
5. The Cache object caching method for an e-commerce system according to claim 1, characterized in that the object caching method of the main control server further comprises a method of dynamically adding and removing storage nodes, which comprises the following steps:
(1) judging, according to the load of each storage node in the parameter configuration information, whether the load of each storage node is within a preset threshold range;
(2) if so, keeping the number of storage nodes unchanged;
(3) if not, adding or removing storage nodes, redistributing the hash values of the storage nodes, and updating the parameter configuration information.
6. The Cache object caching method for an e-commerce system according to claim 5, characterized in that step (3) of the method of dynamically adding and removing storage nodes further comprises:
(1) judging whether there is a storage node whose load is greater than the maximum value of the threshold range;
(2) if so, adding at least one storage node;
(3) judging whether there is a storage node whose load is less than the minimum value of the threshold range;
(4) if so, removing at least one storage node.
7. The distributed data cluster caching system according to claim 1, characterized in that the object caching method of the main control server further comprises a method of monitoring the operating condition of the physical servers in real time, which comprises the following steps:
(1) when reading data from a storage node, judging whether the selected physical server has failed;
(2) if so, reselecting another physical server;
(3) if not, reading the data from the storage instance deployed on that physical server.
CN201810760912.7A 2018-07-12 2018-07-12 Cache object caching method of electronic commerce system Pending CN109032794A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810760912.7A CN109032794A (en) 2018-07-12 2018-07-12 Cache object caching method of electronic commerce system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810760912.7A CN109032794A (en) 2018-07-12 2018-07-12 Cache object caching method of electronic commerce system

Publications (1)

Publication Number Publication Date
CN109032794A true CN109032794A (en) 2018-12-18

Family

ID=64641887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810760912.7A Pending CN109032794A (en) 2018-07-12 2018-07-12 Cache object caching method of electronic commerce system

Country Status (1)

Country Link
CN (1) CN109032794A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521405A (en) * 2011-12-26 2012-06-27 中国科学院计算技术研究所 Massive structured data storage and query methods and systems supporting high-speed loading
CN104657500A (en) * 2015-03-12 2015-05-27 浪潮集团有限公司 Distributed storage method based on KEY-VALUE KEY VALUE pair
CN105516367A (en) * 2016-02-02 2016-04-20 北京百度网讯科技有限公司 Distributed data storage system, method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111367672A (en) * 2020-03-05 2020-07-03 北京奇艺世纪科技有限公司 Data caching method and device, electronic equipment and computer storage medium
CN113468127A (en) * 2020-03-30 2021-10-01 同方威视科技江苏有限公司 Data caching method, device, medium and electronic equipment

Similar Documents

Publication Publication Date Title
US7457835B2 (en) Movement of data in a distributed database system to a storage location closest to a center of activity for the data
Annamalai et al. Sharding the shards: managing datastore locality at scale with Akkio
CN105095313B (en) A kind of data access method and equipment
CN106250226B (en) Method for scheduling task and system based on consistency hash algorithm
KR20180055952A (en) Data replication technique in database management system
CN105138679B (en) A kind of data processing system and processing method based on distributed caching
Esteves et al. Quality-of-service for consistency of data geo-replication in cloud computing
CN108664516A (en) Enquiring and optimizing method and relevant apparatus
Tang et al. Deferred lightweight indexing for log-structured key-value stores
CN103218175A (en) Multi-tenant cloud storage platform access control system
CN114490141B (en) High-concurrency IPC data interaction method based on shared memory
CN103595799A (en) Method for achieving distributed shared data bank
CN102663007A (en) Data storage and query method supporting agile development and lateral spreading
CN105159845A (en) Memory reading method
CN106713391A (en) Session information sharing method and sharing system
CN110287264A (en) Batch data update method, device and the system of distributed data base
CN110147345A (en) A kind of key assignments storage system and its working method based on RDMA
CN105975345A (en) Video frame data dynamic equilibrium memory management method based on distributed memory
CN108573029A (en) Method, device and storage medium for acquiring network access relation data
CN109032794A (en) Cache object caching method of electronic commerce system
KR20190022600A (en) Data replication technique in database management system
Vilaça et al. A correlation-aware data placement strategy for key-value stores
CN106649150A (en) Cache management method and device
CN109165096A (en) The caching of web cluster utilizes system and method
CN103365987A (en) Clustered database system and data processing method based on shared-disk framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181218