CN104052824A - Distributed cache method and system - Google Patents

Distributed cache method and system

Info

Publication number
CN104052824A
CN104052824A
Authority
CN
China
Prior art keywords
file
cache
hdfs
server side
service unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410317772.8A
Other languages
Chinese (zh)
Other versions
CN104052824B (en)
Inventor
何震宇
张高伟
李鑫
李义
陈明明
刘伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN201410317772.8A priority Critical patent/CN104052824B/en
Publication of CN104052824A publication Critical patent/CN104052824A/en
Application granted granted Critical
Publication of CN104052824B publication Critical patent/CN104052824B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a distributed cache method and system. The method comprises the following steps: first, an HDFS data acquisition step, in which file content stored on HDFS is read through the HDFS API and loaded into shared memory; second, a shared-memory management step, which connects the dynamic client cache library with the server-side cache service units; third, an interaction control step, which handles interaction within the same host, completes remote interaction with the ZooKeeper server, and computes the hash of the requested file to locate it in the cache. The method and system cache the files stored on HDFS so that file reads are served directly from the cache, which greatly shortens file read time, improves the throughput of real-time cloud services, and reduces cloud service response time.

Description

Distributed caching method and system
Technical field
The present invention relates to the field of Internet data caching, and in particular to a distributed caching method and system.
Background art
The rapid development of the Internet has ushered in the era of big data and given rise to a wide variety of cloud computing services, making the storage and retrieval of massive data an important research direction. The mainstream approach is to store files in a distributed file system; for applications that require real-time processing, file retrieval performance then becomes a major challenge.
Apache Hadoop is an open-source software framework for the distributed processing of massive data.
Hadoop is released under the open-source Apache License 2.0.
Hadoop is a distributed system architecture consisting mainly of Map/Reduce and HDFS. HDFS is a fault-tolerant system suitable for deployment on inexpensive machines; it provides high-throughput data access, is well suited to applications with large data sets, and is commonly used to store massive numbers of files, but it lacks support for real-time applications. Reading a file from HDFS involves establishing connections and interacting with many nodes, which greatly increases file read time.
Summary of the invention
To solve these problems in the prior art, the invention provides a distributed caching method.
The invention provides a distributed caching method, characterized in that the method comprises a dynamic client cache library and a plurality of server-side cache service units and is built on top of HDFS. Each server-side cache service unit performs the following steps:
HDFS data acquisition step: reading file content stored on HDFS through the HDFS API and loading it into shared memory (illustrated in the sketch after these steps);
Shared-memory management step: connecting the dynamic client cache library with the server-side cache service unit.
The cloud service application side performs the following step:
Interaction control step: handling interaction within the same host, completing remote interaction with the ZooKeeper server, and computing the hash of the requested file to locate it in the cache.
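The HDFS data acquisition step can be illustrated with a minimal sketch using the public Hadoop FileSystem API. This is an illustration only, not the patented implementation: the hand-off to shared memory is represented by a plain byte array, and the class name, URI, and buffer size are assumptions.

    import java.io.ByteArrayOutputStream;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsLoader {
        // Reads the full content of an HDFS file; the caller would copy the
        // returned bytes into the shared-memory pages described later.
        public static byte[] readFromHdfs(String hdfsUri, String file) throws Exception {
            Configuration conf = new Configuration();
            try (FileSystem fs = FileSystem.get(URI.create(hdfsUri), conf);
                 FSDataInputStream in = fs.open(new Path(file));
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                IOUtils.copyBytes(in, out, 4096, false); // 4096 = copy buffer size
                return out.toByteArray();
            }
        }
    }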
As a further improvement of the present invention, the distributed caching method further comprises a serialized snapshot step running on the server-side cache service unit, in which the metadata and all cached files are periodically written to the local operating system's file system, forming a series of snapshots.
As a further improvement of the present invention, in the shared-memory management step, the file content obtained in the HDFS data acquisition step is loaded into the distributed file cache, providing clients with a write-once, read-many access model.
As a further improvement of the present invention, the cloud service application side further performs a client shared-memory acquisition step, in which the required file content is read through interaction with the shared-memory management step.
As a further improvement of the present invention, in the shared-memory management step, the shared memory is divided into a number of fixed-size pages, where a page is the minimum unit of memory allocation; the last four bytes of each page store the location of the next page of the same file, and all pages of a file are organized as a linked list. When a client requests a file and the cache misses, the server-side cache service unit locates the file on other server-side cache service units or in the local snapshot data of HDFS. When the server-side cache service unit loads file content into shared memory and not enough free pages remain, it uses the LRU algorithm to evict from the metadata information map the cached content that has not been used recently. Content evicted by the LRU algorithm is placed on the local disk; if the local disk does not have enough space, the LRU algorithm is likewise applied to the local disk to remove the files on it that have not been used recently. Each cached file has a plurality of replicas.
The present invention also provides a distributed cache system, characterized in that the system comprises a dynamic client cache library and a plurality of server-side cache service units and is built on top of HDFS. Each server-side cache service unit comprises:
an HDFS data acquisition module: reading file content stored on HDFS through the HDFS API and loading it into shared memory;
a shared-memory management module: connecting the dynamic client cache library with the server-side cache service unit.
The dynamic client cache library comprises:
a client interaction control module: handling interaction within the same host, completing remote interaction with the ZooKeeper server, and computing the hash of the requested file to locate it in the cache.
As a further improvement of the present invention, the distributed cache system further comprises a serialized snapshot module running on the server-side cache service unit, which periodically writes the metadata and all cached files to the local operating system's file system, forming a series of snapshots.
As a further improvement of the present invention, in the shared-memory management module, the file content obtained in the HDFS data acquisition step is loaded into the distributed file cache, providing clients with a write-once, read-many access model.
As a further improvement of the present invention, the dynamic client cache library further comprises a client shared-memory acquisition module, which reads the required file content through interaction with the shared-memory management module.
As a further improvement of the present invention, in the shared-memory management module, the shared memory is divided into a number of fixed-size pages, where a page is the minimum unit of memory allocation; the last four bytes of each page store the location of the next page of the same file, and all pages of a file are organized as a linked list. When a client requests a file and the cache misses, the server-side cache service unit locates the file on other server-side cache service units or in the local snapshot data of HDFS. When the server-side cache service unit loads file content into shared memory and not enough free pages remain, it uses the LRU algorithm to evict from the metadata information map the cached content that has not been used recently. Content evicted by the LRU algorithm is placed on the local disk; if the local disk does not have enough space, the LRU algorithm is likewise applied to the local disk to remove the files on it that have not been used recently. Each cached file has a plurality of replicas.
The beneficial effects of the invention are as follows: the distributed caching method and system cache the files stored on HDFS, so that file reads are served directly from the cache. This greatly reduces file read time, thereby improving the throughput of real-time cloud services and reducing their response time.
Brief description of the drawings
Fig. 1 is a logical schematic diagram of the present invention;
Fig. 2 is a system architecture diagram of the present invention;
Fig. 3 is a schematic diagram of the shared-memory management module of the present invention;
Fig. 4 is a schematic diagram of the algorithm for loading files from HDFS into the distributed cache according to the present invention;
Fig. 5 is a schematic diagram of distributed cache replica node selection according to the present invention.
Embodiment
As shown in Fig. 1, the invention discloses a distributed caching method comprising a dynamic client cache library and a plurality of server-side cache service units, built on top of HDFS. Each server-side cache service unit performs the following steps:
HDFS data acquisition step: reading file content stored on HDFS through the HDFS API and loading it into shared memory;
Shared-memory management step: connecting the dynamic client cache library with the server-side cache service unit.
The cloud service application side performs the following step:
Interaction control step: handling interaction within the same host, completing remote interaction with the ZooKeeper server, and computing the hash of the requested file to locate it in the cache.
The distributed caching method further comprises a serialized snapshot step running on the server-side cache service unit, in which the metadata and all cached files are periodically written to the local operating system's file system, forming a series of snapshots.
In the shared-memory management step, the file content obtained in the HDFS data acquisition step is loaded into the distributed file cache, providing clients with a write-once, read-many access model.
The cloud service application side further performs a client shared-memory acquisition step, in which the required file content is read through interaction with the shared-memory management step.
In the shared-memory management step, the shared memory is divided into a number of fixed-size pages, where a page is the minimum unit of memory allocation; the last four bytes of each page store the location of the next page of the same file, and all pages of a file are organized as a linked list. When a client requests a file and the cache misses, the server-side cache service unit locates the file on other server-side cache service units or in the local snapshot data of HDFS. When the server-side cache service unit loads file content into shared memory and not enough free pages remain, it uses the LRU algorithm to evict from the metadata information map the cached content that has not been used recently. Content evicted by the LRU algorithm is placed on the local disk; if the local disk does not have enough space, the LRU algorithm is likewise applied to the local disk to remove the files on it that have not been used recently. Each cached file has a plurality of replicas.
As shown in Fig. 1, the invention also discloses a distributed cache system built on top of HDFS, comprising a distributed cache cluster that uses Hadoop, ZooKeeper, and Memcached as its basic framework.
The distributed cache system has a client/server (C/S) architecture, with a dynamic client cache library and a plurality of server-side cache service units.
A dynamic client cache library is provided at the application layer; it consists of two main parts: an interaction control part and a shared-memory acquisition part.
As shown in Fig. 2, each server-side cache service unit comprises:
an HDFS data acquisition module: reading file content stored on HDFS through the HDFS API and loading it into shared memory;
a shared-memory management module: connecting the dynamic client cache library with the server-side cache service unit. The shared-memory management module loads the file content obtained in the HDFS data acquisition step into the distributed file cache; it stores the distributed cache content and provides clients with a write-once, read-many access model.
The dynamic client cache library comprises:
a client interaction control module: handling interaction within the same host, completing remote interaction with the ZooKeeper server, and computing the hash of the requested file to locate it in the cache; for example, it computes the hash value of the requested file and uses it to locate the specific cache that holds the file.
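A minimal sketch of how the client interaction control module might combine the two duties just described, remote interaction with ZooKeeper and hash-based cache location: it lists the cache servers registered under an assumed znode path (/cache/servers) and picks one by hashing the file name. String.hashCode stands in for the Ketama hashing sketched further below; all names here are illustrative.

    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public class CacheLocator {
        private final ZooKeeper zk;

        public CacheLocator(ZooKeeper zk) { this.zk = zk; }

        // Looks up the live cache servers and picks one for the given file.
        // Assumes at least one server has registered itself under the znode.
        public String locate(String fileName) throws Exception {
            List<String> servers = zk.getChildren("/cache/servers", false);
            if (servers.isEmpty()) throw new IllegalStateException("no cache servers registered");
            int idx = Math.floorMod(fileName.hashCode(), servers.size());
            return servers.get(idx);
        }
    }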
The distributed cache system further comprises a serialized snapshot module running on the server-side cache service unit, which periodically writes the metadata and all cached files to the local operating system's file system, forming a series of snapshots. After a host crashes, the contents of the distributed cache can be rebuilt by reading the local file system.
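A minimal sketch of the serialized snapshot module under these assumptions: a single scheduled task serializes the metadata map to a timestamped file in a local snapshot directory. The real module also writes the cached file contents; the class and method names are illustrative.

    import java.io.FileOutputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.util.Map;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SnapshotWriter {
        private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();

        // Periodically serializes the metadata map (the runtime map type must
        // itself be Serializable, e.g. HashMap), producing a numbered series
        // of snapshots that can be reloaded after a crash.
        public void start(Map<String, ? extends Serializable> metaInfoMap,
                          String snapshotDir, long periodSeconds) {
            timer.scheduleAtFixedRate(() -> {
                String path = snapshotDir + "/snapshot-" + System.currentTimeMillis();
                try (ObjectOutputStream out =
                         new ObjectOutputStream(new FileOutputStream(path))) {
                    out.writeObject(metaInfoMap);
                } catch (Exception e) {
                    e.printStackTrace(); // a real service would log and retry
                }
            }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
        }
    }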
The dynamic client cache library further comprises a client shared-memory acquisition module, which reads the required file content through interaction with the shared-memory management module.
The HDFS data acquisition module, the shared-memory management module, and the serialized snapshot module are server processes running on the server side; the client interaction control module and the client shared-memory acquisition module are deployed on the cloud service application side.
The present invention assumes a cluster of five ordinary PCs. Fig. 3 is a schematic diagram of the shared-memory management module of the present invention: the shared memory is divided into many fixed-size pages (typically 4 KB each, configurable by the user). In this distributed cache system, the page is the minimum unit of memory allocation; the last four bytes of each page store the location of the next page of the same file, and all pages of a file are organized as a linked list. The first page of every file is recorded in the FirstP field of the metadata information map (Meta Info Map); the cloud service client library locates the file on a cache server and then reads the file content directly from shared memory. The memory bitmap data structure in Fig. 3 is used by the memory management part to request and allocate free memory pages.
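The page layout of Fig. 3 can be modeled with a short sketch. Real shared memory between processes would use a memory-mapped file or System V shared memory; a ByteBuffer stands in for it here, and all names (PageStore, metaInfoMap, and so on) are illustrative, not taken from the patent.

    import java.nio.ByteBuffer;
    import java.util.BitSet;
    import java.util.HashMap;
    import java.util.Map;

    public class PageStore {
        static final int PAGE_SIZE = 4096;           // default page size, user-tunable
        static final int DATA_SIZE = PAGE_SIZE - 4;  // last 4 bytes: next-page index

        final ByteBuffer memory;
        final BitSet freeMap;                        // memory bitmap: 1 bit per page
        final Map<String, Integer> metaInfoMap = new HashMap<>(); // file -> FirstP

        PageStore(int pageCount) {
            memory = ByteBuffer.allocate(pageCount * PAGE_SIZE);
            freeMap = new BitSet(pageCount);
        }

        // Stores a file as a linked chain of pages; returns the first page index.
        // A full store would trigger the LRU eviction sketched below instead of
        // running past the end of the buffer.
        int store(String name, byte[] data) {
            int first = -1, prev = -1;
            for (int off = 0; off == 0 || off < data.length; off += DATA_SIZE) {
                int page = freeMap.nextClearBit(0);
                freeMap.set(page);
                int len = Math.min(DATA_SIZE, data.length - off);
                memory.position(page * PAGE_SIZE);
                memory.put(data, off, len);
                memory.putInt(page * PAGE_SIZE + DATA_SIZE, -1); // end of chain
                if (prev >= 0) memory.putInt(prev * PAGE_SIZE + DATA_SIZE, page);
                else first = page;
                prev = page;
            }
            metaInfoMap.put(name, first);
            return first;
        }
    }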
When a cloud service application opens a cached file, the dynamic client cache library records the address of the file's first page and returns a file descriptor. The file descriptor is associated with the file's metadata, such as the read/write pointer, and the client reads or writes file content through it. Because a server-side cache service unit supports multiple cloud service applications running simultaneously, the clients each hold their own read/write pointers.
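A minimal sketch of the client-side descriptor bookkeeping just described: each open file records the address of its first cache page and a private read/write pointer, so several applications can read the same cached file at independent offsets. The names and the reserved-descriptor convention are assumptions.

    import java.util.HashMap;
    import java.util.Map;

    public class ClientCacheLib {
        static class OpenFile {
            final int firstPage;   // address of the file's first cache page
            long readPointer;      // per-descriptor read/write pointer
            OpenFile(int firstPage) { this.firstPage = firstPage; }
        }

        private final Map<Integer, OpenFile> fdTable = new HashMap<>();
        private int nextFd = 3;    // 0-2 left unused by convention

        // Records the file's first page and hands back a descriptor.
        public int open(int firstPage) {
            int fd = nextFd++;
            fdTable.put(fd, new OpenFile(firstPage));
            return fd;
        }

        // Moves this descriptor's read/write pointer without affecting others.
        public void seek(int fd, long pos) { fdTable.get(fd).readPointer = pos; }
    }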
When a client requests a file and the cache misses, the server-side cache service unit locates the file on other server-side cache service units or in the local snapshot data of HDFS. When the server-side cache service unit loads file content into shared memory and not enough free pages remain, it uses the LRU algorithm to evict from the metadata information map the cached content that has not been used recently. Content evicted by the LRU algorithm is placed on the local disk; if the local disk does not have enough space, the LRU algorithm is likewise applied to the local disk to remove the files on it that have not been used recently.
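The two-level LRU policy described above maps naturally onto Java's LinkedHashMap in access order; a sketch under that assumption follows, with the disk level left as a stub.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Files evicted from the in-memory cache are spilled to local disk, and
    // the disk level applies the same LRU policy when it fills up.
    public class TwoLevelLru<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        public TwoLevelLru(int capacity) {
            super(16, 0.75f, true); // accessOrder=true gives LRU iteration order
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            if (size() > capacity) {
                spillToDisk(eldest.getKey(), eldest.getValue());
                return true; // drop the least-recently-used entry from memory
            }
            return false;
        }

        private void spillToDisk(K key, V value) {
            // Write the evicted file to the local disk cache; if the disk cache
            // is full, remove its own least-recently-used files first.
        }
    }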
As shown in Fig. 4, the present invention loads files from HDFS into the distributed cache according to the following design. HDFS uses replicas to strengthen system robustness: by default, each file stored on HDFS has three replicas kept on different machines, which reduces the impact of any one machine crashing. The present invention adopts the same design: each cached file has three replicas.
In contrast to the single master node design of HDFS, the present invention adopts a DHT (Distributed Hash Table) design. DHTs are widely used in P2P systems and cloud storage systems. For the distributed storage scheduling strategy described in Fig. 4, the Ketama hash function is chosen, not only because Ketama is an open-source implementation, but mainly because it balances computational performance, hit rate, and dispersion.
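A Ketama-style consistent hash ring can be sketched as a TreeMap of virtual points, as below. The 160 virtual points per node and the MD5 folding are conventional Ketama choices, not figures taken from the patent.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.SortedMap;
    import java.util.TreeMap;

    public class KetamaRing {
        final TreeMap<Long, String> ring = new TreeMap<>();

        // Maps each physical node to many virtual points on the ring.
        public void addNode(String node) throws Exception {
            for (int i = 0; i < 160; i++) {
                ring.put(md5Point(node + "#" + i), node);
            }
        }

        // A file is assigned to the first node clockwise from its own hash.
        public String nodeFor(String fileName) throws Exception {
            long h = md5Point(fileName);
            SortedMap<Long, String> tail = ring.tailMap(h);
            return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
        }

        static long md5Point(String key) throws Exception {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(key.getBytes(StandardCharsets.UTF_8));
            long h = 0; // fold the first 8 digest bytes into a long
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF);
            return h;
        }
    }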
The Cache_File algorithm described in Fig. 4 is invoked when the cache service receives a file acquisition request and no replica of the file can be found in the distributed cache system. The cache service computes the file's hash value and finds two other host nodes, so that the file is stored on three nodes of the distributed cache system.
Fig. 5 shows the consistent-hashing balancing method, i.e., the process of selecting distributed cache replica nodes. As shown in Fig. 5, when node A caches file 1, one replica is stored in node A's local cache service; the nodes storing the other replicas are selected as follows (see the sketch after this list):
1. The second replica is placed on node B, where B is the next node clockwise from node A;
2. The last replica is placed on node X, where X is the next node clockwise after node B, provided B exists.
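A sketch of this replica selection on the ring built by the KetamaRing sketch above: walk clockwise from the file's hash point and collect the first three distinct physical nodes (node A, then B, then X in Fig. 5). Virtual points belonging to an already-chosen node are skipped so the replicas land on different machines; class and method names are illustrative.

    import java.util.ArrayList;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    public final class ReplicaSelector {
        public static List<String> replicaNodes(TreeMap<Long, String> ring, long fileHash) {
            LinkedHashSet<String> picked = new LinkedHashSet<>();
            if (ring.isEmpty()) return new ArrayList<>();
            Map.Entry<Long, String> e = ring.ceilingEntry(fileHash);
            int visited = 0;
            while (picked.size() < 3 && visited++ <= ring.size()) {
                if (e == null) e = ring.firstEntry(); // wrap around the ring
                picked.add(e.getValue());             // duplicates are ignored
                e = ring.higherEntry(e.getKey());
            }
            return new ArrayList<>(picked);
        }
    }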
When a file read request arrives at the cache service on a cache server, the file is first looked up in the distributed cache;
if the lookup misses and a local disk snapshot exists, the cache service loads the local disk snapshot and then retrieves again;
if the cache service handler still does not find the required file, the cache service loads the file from HDFS, stores the loaded file, and notifies the cache service. A sketch of this read path follows.
The distributed cache system of the present invention sits on top of HDFS rather than modifying anything inside HDFS; it is independent of HDFS, and no change to the distributed cache system alters the underlying HDFS.
The distributed caching method and system of the present invention cache the files stored on HDFS, so that file reads are served directly from the cache. This greatly reduces file read time, thereby improving the throughput of real-time cloud services and reducing their response time.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, but the specific implementation of the present invention should not be considered limited to these descriptions. For a person of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A distributed caching method, characterized in that the method comprises a dynamic client cache library and a plurality of server-side cache service units and is built on top of HDFS, and that each server-side cache service unit performs the following steps:
HDFS data acquisition step: reading file content stored on HDFS through the HDFS API and loading it into shared memory;
Shared-memory management step: connecting the dynamic client cache library with the server-side cache service unit; and the cloud service application side performs the following step:
Interaction control step: handling interaction within the same host, completing remote interaction with the ZooKeeper server, and computing the hash of the requested file to locate it in the cache.
2. The distributed caching method according to claim 1, characterized in that the method further comprises a serialized snapshot step running on the server-side cache service unit, in which the metadata and all cached files are periodically written to the local operating system's file system, forming a series of snapshots.
3. The distributed caching method according to claim 1, characterized in that, in the shared-memory management step, the file content obtained in the HDFS data acquisition step is loaded into the distributed file cache, providing clients with a write-once, read-many access model.
4. The distributed caching method according to claim 3, characterized in that the cloud service application side further performs a client shared-memory acquisition step, in which the required file content is read through interaction with the shared-memory management step.
5. The distributed caching method according to claim 2, characterized in that, in the shared-memory management step, the shared memory is divided into a number of fixed-size pages, where a page is the minimum unit of memory allocation, the last four bytes of each page store the location of the next page of the same file, and all pages of a file are organized as a linked list; when a client requests a file and the cache misses, the server-side cache service unit locates the file on other server-side cache service units or in the local snapshot data of HDFS; when the server-side cache service unit loads file content into shared memory and not enough free pages remain, it uses the LRU algorithm to evict from the metadata information map the cached content that has not been used recently; content evicted by the LRU algorithm is placed on the local disk, and if the local disk does not have enough space, the LRU algorithm is likewise applied to the local disk to remove the files on it that have not been used recently; and each cached file has a plurality of replicas.
6. A distributed cache system, characterized in that the system comprises a dynamic client cache library and a plurality of server-side cache service units and is built on top of HDFS, and that each server-side cache service unit comprises:
an HDFS data acquisition module: reading file content stored on HDFS through the HDFS API and loading it into shared memory;
a shared-memory management module: connecting the dynamic client cache library with the server-side cache service unit; and the dynamic client cache library comprises:
a client interaction control module: handling interaction within the same host, completing remote interaction with the ZooKeeper server, and computing the hash of the requested file to locate it in the cache.
7. The distributed cache system according to claim 6, characterized in that the system further comprises a serialized snapshot module running on the server-side cache service unit, which periodically writes the metadata and all cached files to the local operating system's file system, forming a series of snapshots.
8. The distributed cache system according to claim 6, characterized in that, in the shared-memory management module, the file content obtained in the HDFS data acquisition step is loaded into the distributed file cache, providing clients with a write-once, read-many access model.
9. The distributed cache system according to claim 8, characterized in that the dynamic client cache library further comprises a client shared-memory acquisition module, which reads the required file content through interaction with the shared-memory management module.
10. The distributed cache system according to claim 7, characterized in that, in the shared-memory management module, the shared memory is divided into a number of fixed-size pages, where a page is the minimum unit of memory allocation, the last four bytes of each page store the location of the next page of the same file, and all pages of a file are organized as a linked list; when a client requests a file and the cache misses, the server-side cache service unit locates the file on other server-side cache service units or in the local snapshot data of HDFS; when the server-side cache service unit loads file content into shared memory and not enough free pages remain, it uses the LRU algorithm to evict from the metadata information map the cached content that has not been used recently; content evicted by the LRU algorithm is placed on the local disk, and if the local disk does not have enough space, the LRU algorithm is likewise applied to the local disk to remove the files on it that have not been used recently; and each cached file has a plurality of replicas.
CN201410317772.8A 2014-07-04 2014-07-04 Distributed caching method and system Expired - Fee Related CN104052824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410317772.8A CN104052824B (en) 2014-07-04 2014-07-04 Distributed caching method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410317772.8A CN104052824B (en) 2014-07-04 2014-07-04 Distributed caching method and system

Publications (2)

Publication Number Publication Date
CN104052824A true CN104052824A (en) 2014-09-17
CN104052824B CN104052824B (en) 2017-06-23

Family

ID=51505175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410317772.8A Expired - Fee Related CN104052824B (en) 2014-07-04 2014-07-04 Distributed caching method and system

Country Status (1)

Country Link
CN (1) CN104052824B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060168118A1 (en) * 2001-02-28 2006-07-27 Disksites Research And Development Ltd. Method and system for differential distributed data file storage, management and access
CN102103544A (en) * 2009-12-16 2011-06-22 腾讯科技(深圳)有限公司 Method and device for realizing distributed cache
CN102577241A (en) * 2009-12-31 2012-07-11 华为技术有限公司 Method, device and system for scheduling distributed buffer resources
CN101867607A (en) * 2010-05-21 2010-10-20 北京无限立通通讯技术有限责任公司 Distributed data access method, device and system
CN102387169A (en) * 2010-08-26 2012-03-21 阿里巴巴集团控股有限公司 Delete method, system and delete server for distributed cache objects

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117394A (en) * 2014-11-04 2015-12-02 合肥轩明信息科技有限公司 Application mode based on caching technology
CN105302922A (en) * 2015-11-24 2016-02-03 无锡江南计算技术研究所 Realizing method for snapshot of distributed file system
CN105302922B (en) * 2015-11-24 2018-07-06 无锡江南计算技术研究所 A kind of distributed file system snapshot implementing method
CN108243170A (en) * 2016-12-27 2018-07-03 青岛融贯汇众软件有限公司 Data access system and method based on socket frames
CN107396320A (en) * 2017-07-05 2017-11-24 河海大学 A kind of distributed indoor real-time location method of more detection sources based on buffer queue
CN107396320B (en) * 2017-07-05 2020-02-18 河海大学 Multi-detection-source distributed indoor real-time positioning method based on cache queue
CN111400350A (en) * 2020-03-13 2020-07-10 上海携程商务有限公司 Configuration data reading method, system, electronic device and storage medium
CN111400350B (en) * 2020-03-13 2023-05-02 上海携程商务有限公司 Configuration data reading method, system, electronic device and storage medium
CN112558869A (en) * 2020-12-11 2021-03-26 北京航天世景信息技术有限公司 Remote sensing image caching method based on big data

Also Published As

Publication number Publication date
CN104052824B (en) 2017-06-23

Similar Documents

Publication Publication Date Title
Nicolae et al. BlobSeer: Bringing high throughput under heavy concurrency to Hadoop Map-Reduce applications
CN107169083B (en) Mass vehicle data storage and retrieval method and device for public security card port and electronic equipment
US10157214B1 (en) Process for data migration between document stores
CN104052824A (en) Distributed cache method and system
TW201220197A (en) for improving the safety and reliability of data storage in a virtual machine based on cloud calculation and distributed storage environment
CN103605630B (en) Virtual server system and data reading-writing method thereof
CN103678603A (en) Multi-source heterogeneous data efficient converging and storing frame system
CN103595799A (en) Method for achieving distributed shared data bank
CN104572505A (en) System and method for ensuring eventual consistency of mass data caches
US9304946B2 (en) Hardware-base accelerator for managing copy-on-write of multi-level caches utilizing block copy-on-write differential update table
Liu et al. Massive image data management using HBase and MapReduce
CN110275840A (en) Distributed process on memory interface executes and file system
CN103559247B (en) A kind of data service handling method and device
Cruz et al. A scalable file based data store for forensic analysis
CN104158863A (en) Cloud storage mechanism based on transaction-level whole-course high-speed buffer
CN103942301A (en) Distributed file system oriented to access and application of multiple data types
Islam et al. Efficient data access strategies for Hadoop and Spark on HPC cluster with heterogeneous storage
US20190243807A1 (en) Replication of data in a distributed file system using an arbiter
US11030714B2 (en) Wide key hash table for a graphics processing unit
US8630979B2 (en) Non-blocking input output based storage
CN101783814A (en) Metadata storing method for mass storage system
US9690886B1 (en) System and method for a simulation of a block storage system on an object storage system
CN104268225A (en) File system architecture for addressing in multidimensional degree of freedom, as well as generating and accessing mode thereof
CN104850548B (en) A kind of method and system for realizing big data platform input/output processing
CN105354310B (en) Map tile storage layout optimization method based on MapReduce

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170623
Termination date: 20190704