CN107483546A - File storage method and file storage device - Google Patents

File storage method and file storage device

Info

Publication number
CN107483546A
CN107483546A CN201710599321.1A
Authority
CN
China
Prior art keywords
data
requested file
request header
file
splitting
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN201710599321.1A
Other languages
Chinese (zh)
Inventor
董宇
李亮
Current Assignee (listed assignees may be inaccurate)
Beijing Supply And Marketing Technology Co Ltd
Original Assignee
Beijing Supply And Marketing Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Beijing Supply And Marketing Technology Co Ltd
Priority to CN201710599321.1A
Publication of CN107483546A
Legal status: Pending (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/064 Management of blocks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0643 Management of files

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a file storage method and a file storage device. The file storage device comprises: a configuration unit for configuring the split block data; a request header acquisition unit for obtaining request header data, determining from it the requested file information and the split block information to be returned, and judging from the obtained request header data whether the requested file data is stored locally; a splitting unit for fetching the requested file from the origin server according to the request header data and splitting it, so that the requested file data is divided, in whole or in part, into one or more split blocks that are stored in the content delivery network cache system; and a judging unit for judging whether the data response is complete. The file storage method of the application can split a requested file into one or more files of arbitrary size, thereby solving the prior-art problem that such files either cannot be cached or that caching them slows the response.

Description

File storage method and file storage device
Technical field
The present invention relates to the technical field of content delivery network (CDN) caching, and more particularly to a file storage method and a file storage device.
Background art
Storing large files has always been a problem that CDN caching systems must face. For the scenario in which an incomplete file is requested (a range request) and is not yet cached, there are typically two existing approaches:
1. If the content is not in the caching system, the range request is passed through to the upstream origin server. Because the data returned for a range request is an incomplete file, it cannot be cached.
2. If the content is not in the caching system, the complete file is fetched from the origin and the data required by the range request is then returned to the client. Because the origin responds with the complete file, it can be cached; however, the back-to-origin traffic is amplified from the size of the range request to the size of the complete file, and the response can only be sent after the range portion of the data has been received, which makes the response slow.
Therefore, a technical solution is urgently needed that overcomes, or at least mitigates, at least one of the above drawbacks of the prior art.
Summary of the invention
An object of the present invention is to provide a file storage method that overcomes, or at least mitigates, at least one of the above drawbacks of the prior art.
To achieve the above object, the present application provides a file storage method for a content delivery network cache system. The file storage method comprises the following steps. Step 1: configure the split block data. Step 2: obtain the request header data, determine from it the requested file information and the split block information that need to be returned, and judge from the obtained request header data whether the requested file data asked for by the request header is stored locally; if the complete requested file data is present, end; otherwise, proceed to the next step. Step 3: fetch the requested file from the origin server according to the request header data and split it, so that the requested file data is divided, in whole or in part, into one or more split blocks that are stored in the content delivery network cache system. Step 4: judge whether the data response is complete; if so, end; if not, repeat step 3 until the data response is complete.
Preferably, the split block configuration in step 1 specifically comprises: configuring the split block size, presetting the block splitting method, and presetting the block identification method.
Preferably, obtaining the request header data in step 2 further comprises a preceding step of building and querying a cache index, the cache index being used to judge whether the requested file data is stored locally.
Preferably, step 2 specifically comprises: Step 21: judge whether the request header data carries an HTTP range request header; if so, proceed to step 22; if not, proceed to step 23. Step 22: use the whole of the requested file information to be returned, as determined by the request header data, as the split block information for calculation. Step 23: determine, according to the request header data, the range of the requested file information to be returned, and use the start and end offsets of that range as the split block information for calculation.
Preferably, step 3 specifically comprises: Step 31: obtain the to-be-split range of the requested file data, the to-be-split range comprising a first condition and a second condition. Step 32: receive the requested file data in real time, using the to-be-split block size as the unit, and judge whether the start offset of the received requested file data is less than the second condition; if so, proceed to the next step; if not, treat the received requested file data of the to-be-split block size as stray data and continue receiving requested file data, block by block, until the judgement is yes. Step 33: judge whether the start offset of the requested file data received in real time in step 32 is less than or equal to the first condition; if so, proceed to step 34; if not, proceed to step 37. Step 34: judge whether the end offset of the received requested file data is less than the second condition; if so, proceed to step 35; if not, proceed to step 36. Step 35: take the requested file data of the to-be-split block size as a split block, and repeat step 32. Step 36: take the requested file data of the to-be-split block size as a split block. Step 37: calculate the offset of the start of the requested file data of the to-be-split block size, and judge whether the end offset of the offset-adjusted requested file data of the to-be-split block size is less than the second condition; if so, proceed to step 38; if not, proceed to step 39. Step 38: calculate the offset of the end of the requested file data of the to-be-split block size, and take the data from the adjusted start offset to the adjusted end offset as a split block. Step 39: discard the requested file data of the to-be-split block size, continue receiving requested file data using the to-be-split block size as the unit, and repeat step 32.
Preferably, the first condition is (N-1)*blocksize, where N is the split block number and blocksize is the size of each split block;
and the second condition is N*(blocksize-1), where N is the split block number and blocksize is the size of each split block.
The present invention also provides a file storage device. The file storage device comprises: a configuration unit for configuring the split block data; a request header acquisition unit for obtaining the request header data, determining from it the requested file information and the split block information to be returned, and judging from the obtained request header data whether the requested file data is stored locally; a splitting unit for fetching the requested file from the origin server according to the request header data and splitting it, so that the requested file data is divided, in whole or in part, into one or more split blocks that are stored in the content delivery network cache system; and a judging unit for judging whether the data response is complete.
Preferably, the configuration unit comprises: a split block size configuration unit for configuring the split block size; a splitting method presetting unit for presetting the block splitting method; and an identification presetting unit for presetting the block identification method.
Preferably, the request header acquisition unit comprises: a request header judging unit for judging whether the requested file data asked for in the request header data is stored locally; a requested file information acquisition unit for obtaining the requested file information to be fetched that is carried in the request header; and a split block information acquisition unit for obtaining the split block information to be fetched that is carried in the request header.
Preferably, the request header acquisition unit further comprises: a cache index unit for generating a cache index, the cache index being used to judge whether the requested file data is stored locally.
The file storage method of the present application can split a requested file into one or more files of arbitrary size, thereby solving the prior-art problem that such files either cannot be cached or that caching them slows the response.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the file storage method according to one embodiment of the invention.
Detailed description of the embodiments
To make the purpose, technical solution, and advantages of the present invention clearer, the technical solution in the embodiments of the present invention is described in more detail below with reference to the accompanying drawings. The described embodiments are some, rather than all, of the embodiments of the present invention. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and are not to be construed as limiting it. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention. The embodiments of the invention are described in detail below with reference to the drawings.
Fig. 1 is a schematic flow chart of the file storage method according to one embodiment of the invention.
A content delivery network (CDN) cache system is built on top of an IP network and, based on the efficiency requirements, quality requirements and ordering of content access and applications, provides distribution and serving of content.
The file storage method of the present application is mainly used in a content delivery network cache system.
The file storage method shown in Fig. 1 comprises the following steps. Step 1: configure the split block data.
Step 2: obtain the request header data, determine from it the requested file information and the split block information that need to be returned, and judge from the obtained request header data whether the requested file data asked for by the request header is stored locally; if the complete requested file data is present, end; otherwise, proceed to the next step.
Step 3: fetch the requested file from the origin server according to the request header data and split it, so that the requested file data is divided, in whole or in part, into one or more split blocks that are stored in the content delivery network cache system.
Step 4: judge whether the data response is complete; if so, end; if not, repeat step 3 until the data response is complete.
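By way of illustration only, the four steps can be compressed into the following rough Python sketch, in which an in-memory dict stands in for the content delivery network cache and a bytes object stands in for the origin server; the names, the cache-key layout and the fixed 512 KB block size are illustrative assumptions rather than part of the claimed method.

    # Rough end-to-end sketch of steps 1 to 4 (illustrative only).
    BLOCK_SIZE = 512 * 1024  # step 1: configure the split block size

    def serve(url: str, origin: bytes, start: int, end: int, cache: dict) -> bytes:
        """Return bytes start..end of the file, caching it as split blocks along the way."""
        out = b""
        for n in range(start // BLOCK_SIZE, end // BLOCK_SIZE + 1):
            key = f"{url}#{n}"                                            # step 2: which blocks are needed?
            if key not in cache:                                          # block not yet stored locally
                cache[key] = origin[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE]  # step 3: fetch from the origin and split
            out += cache[key]                                             # step 4: respond block by block
        lo = start % BLOCK_SIZE
        return out[lo:lo + (end - start + 1)]

    cache: dict = {}
    data = bytes(range(256)) * 8192                      # a 2 MB stand-in for the origin file
    print(len(serve("http://example.com/big.bin", data, 100_000, 1_200_000, cache)))  # 1100001
    print(sorted(cache))                                 # blocks 0, 1 and 2 are now cached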
The file storage method of the present application can split a requested file into one or more files of arbitrary size, thereby solving the prior-art problem that such files either cannot be cached or that caching them slows the response.
In the present embodiment, the split block configuration in step 1 specifically comprises: configuring the split block size, presetting the block splitting method, and presetting the block identification method. For example, the size of each split block can be set to 512 KB. It should be understood that the split block size can be set as needed, for example to 128 KB or 256 KB.
In addition, an identifier should be provided for each split block, that is, a block identification method is preset. For example, for a requested file divided into blocks, the byte range n*block_size to (n+1)*block_size-1 can be used as the identifier of the n-th split block. The split block identifier can be used as the cache index within the cache key. It should be understood that the split blocks are independent of one another; in use, each split block can be regarded as a separately existing file.
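A minimal sketch of this identifier scheme follows; the key format "url#start-end" and the helper names are assumptions made for illustration, not a format defined by the patent.

    BLOCK_SIZE = 512 * 1024  # 512 KB, as in the example above; 128 KB or 256 KB work the same way

    def block_range(n: int, block_size: int = BLOCK_SIZE) -> tuple:
        """Byte range covered by the n-th split block: n*block_size .. (n+1)*block_size - 1."""
        return n * block_size, (n + 1) * block_size - 1

    def block_cache_key(url: str, n: int, block_size: int = BLOCK_SIZE) -> str:
        """Use the block's byte range as its identifier inside the cache key."""
        start, end = block_range(n, block_size)
        return f"{url}#{start}-{end}"

    # The third block (n = 2) of a file covers bytes 1048576..1572863.
    print(block_cache_key("http://example.com/big.bin", 2))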
In the present embodiment, obtaining the requested file data further comprises a preceding step of building and querying the cache key; the cache index is used to judge whether the requested file data is stored locally. Files must be cached in the caching system, and it can happen that some split blocks of a file are already cached while others are not yet cached; in that case, the cache index must be searched to determine which target split blocks are already cached locally.
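As a sketch of such a lookup, the following assumes the cache index is a simple in-memory mapping from block keys to block data; which split blocks of a requested byte range still have to be fetched is then a matter of iterating over the block numbers the range touches. The data structure and key format are illustrative assumptions.

    BLOCK_SIZE = 512 * 1024

    def blocks_for_range(start: int, end: int, block_size: int = BLOCK_SIZE) -> range:
        """Block numbers touched by the byte range start..end."""
        return range(start // block_size, end // block_size + 1)

    def missing_blocks(index: dict, url: str, start: int, end: int) -> list:
        """Split blocks of the requested range that are not yet in the cache index."""
        return [n for n in blocks_for_range(start, end) if f"{url}#{n}" not in index]

    index = {"http://example.com/big.bin#0": b"..."}   # block 0 is already cached
    print(missing_blocks(index, "http://example.com/big.bin", 0, 1_500_000))  # -> [1, 2]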
In the present embodiment, step 2 specifically comprises:
Step 21: judge whether the request header data carries an HTTP range request header; if so, proceed to step 22; if not, proceed to step 23.
Step 22: use the whole of the requested file information to be returned, as determined by the request header data, as the split block information for calculation.
Step 23: determine, according to the request header data, the range of the requested file information to be returned, and use the start and end offsets of that range as the split block information for calculation.
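Whichever branch applies, the split block information is ultimately derived from a start offset and an end offset. A minimal sketch of extracting those offsets from an HTTP Range header follows; it handles only the single-range form bytes=start-end, and the total file size (for example taken from the origin's Content-Length) is an assumption the patent does not spell out.

    def requested_offsets(headers: dict, file_size: int) -> tuple:
        """Start and end byte offsets to respond with, per the Range header if one is present."""
        range_header = headers.get("Range")
        if range_header is None:
            return 0, file_size - 1                   # no range request header: the whole file
        spec = range_header.split("=", 1)[1]          # e.g. "bytes=1048576-2097151"
        start_s, _, end_s = spec.partition("-")
        start = int(start_s)
        end = int(end_s) if end_s else file_size - 1  # open-ended form "bytes=N-"
        return start, end

    print(requested_offsets({"Range": "bytes=1048576-2097151"}, 10 * 2**20))  # (1048576, 2097151)
    print(requested_offsets({}, 10 * 2**20))                                  # (0, 10485759)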
In the present embodiment, step 3 specifically comprises:
Step 31: obtain the to-be-split range of the requested file data, the to-be-split range comprising a first condition and a second condition.
Step 32: receive the requested file data in real time, using the to-be-split block size as the unit, and judge whether the start offset of the received requested file data is less than the second condition; if so, proceed to the next step; if not, treat the received requested file data of the to-be-split block size as stray data and continue receiving requested file data, block by block, until the judgement is yes.
Step 33: judge whether the start offset of the requested file data received in real time in step 32 is less than or equal to the first condition; if so, proceed to step 34; if not, proceed to step 37.
Step 34: judge whether the end offset of the received requested file data is less than the second condition; if so, proceed to step 35; if not, proceed to step 36.
Step 35: take the requested file data of the to-be-split block size as a split block, and repeat step 32.
Step 36: take the requested file data of the to-be-split block size as a split block.
Step 37: calculate the offset of the start of the requested file data of the to-be-split block size, and judge whether the end offset of the offset-adjusted requested file data of the to-be-split block size is less than the second condition; if so, proceed to step 38; if not, proceed to step 39.
Step 38: calculate the offset of the end of the requested file data of the to-be-split block size, and take the data from the adjusted start offset to the adjusted end offset as a split block.
Step 39: discard the requested file data of the to-be-split block size, continue receiving requested file data using the to-be-split block size as the unit, and repeat step 32.
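A much-simplified reading of steps 31 to 39 is sketched below: the origin response is read one block size at a time, complete block-aligned pieces are stored as split blocks, and partial pieces are simply not cached. The offset handling of steps 37 to 39 is reduced here to skipping the unaligned head of the response; the stream, the dict used as the cache and the key format are illustrative assumptions.

    import io

    def split_into_cache(body, url: str, range_start: int, range_end: int,
                         cache: dict, block_size: int = 512 * 1024) -> None:
        """Read the origin response for bytes range_start..range_end and cache complete blocks."""
        offset = range_start
        lead = (-offset) % block_size        # cf. steps 37-39: align to the next block boundary
        if lead:
            body.read(lead)                  # the unaligned head is not stored as a split block
            offset += lead
        while offset <= range_end:
            chunk = body.read(block_size)    # step 32: receive one block-sized piece
            if not chunk:
                break
            if len(chunk) == block_size:     # a complete block: store it (steps 35 and 36)
                cache[f"{url}#{offset}-{offset + block_size - 1}"] = chunk
            offset += len(chunk)

    cache: dict = {}
    split_into_cache(io.BytesIO(b"x" * 2 * 512 * 1024), "http://example.com/big.bin",
                     0, 2 * 512 * 1024 - 1, cache)
    print(sorted(cache))                     # two split blocks: 0-524287 and 524288-1048575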
Specifically, in the present embodiment, the first condition is (N-1)*blocksize, where N is the split block number and blocksize is the size of each split block;
and the second condition is N*(blocksize-1), where N is the split block number and blocksize is the size of each split block.
The present invention also provides a file storage device comprising a configuration unit, a request header acquisition unit, a splitting unit, and a judging unit. The configuration unit is used to configure the split block data. The request header acquisition unit is used to obtain the request header data, determine from it the requested file information and the split block information to be returned, and judge from the obtained request header data whether the requested file data is stored locally. The splitting unit fetches the requested file from the origin server according to the request header data and splits it, so that the requested file data is divided, in whole or in part, into one or more split blocks that are stored in the content delivery network cache system. The judging unit is used to judge whether the data response is complete.
In the present embodiment, the configuration unit comprises:
a split block size configuration unit for configuring the split block size;
a splitting method presetting unit for presetting the block splitting method; and
an identification presetting unit for presetting the block identification method.
In the present embodiment, the request header acquisition unit comprises:
a request header judging unit for judging whether the requested file data asked for in the request header data is stored locally;
a requested file information acquisition unit for obtaining the requested file information to be fetched that is carried in the request header; and
a split block information acquisition unit for obtaining the split block information to be fetched that is carried in the request header.
In the present embodiment, the request header acquisition unit further comprises:
a cache index unit for generating the cache index, the cache index being used to judge whether the requested file data is stored locally.
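Purely as an illustration of how the four units might map onto code, a skeletal sketch follows; the class and method names are assumptions, since the patent defines only each unit's responsibility, not an API.

    from dataclasses import dataclass, field

    @dataclass
    class ConfigurationUnit:                      # configures split block size, splitting method and identifiers
        block_size: int = 512 * 1024

    @dataclass
    class RequestHeaderAcquisitionUnit:           # obtains request header, file and block information
        cache_index: dict = field(default_factory=dict)

        def is_cached(self, key: str) -> bool:
            return key in self.cache_index

    @dataclass
    class SplittingUnit:                          # fetches from the origin and splits into blocks
        config: ConfigurationUnit

    @dataclass
    class JudgingUnit:                            # judges whether the data response is complete
        def response_complete(self, sent: int, total: int) -> bool:
            return sent >= total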
Finally, it should be noted that the above embodiments are merely illustrative of the technical solution of the present invention and are not limiting. Although the present invention has been described in detail with reference to the foregoing embodiments, a person skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features therein may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A file storage method for a content delivery network cache system, characterized in that the file storage method comprises the following steps:
    Step 1: configure the split block data;
    Step 2: obtain the request header data, determine from it the requested file information and the split block information that need to be returned, and judge from the obtained request header data whether the requested file data asked for by the request header is stored locally; if the complete requested file data is present, end; otherwise, proceed to the next step;
    Step 3: fetch the requested file from the origin server according to the request header data and split it, so that the requested file data is divided, in whole or in part, into one or more split blocks that are stored in the content delivery network cache system;
    Step 4: judge whether the data response is complete; if so, end; if not, repeat step 3 until the data response is complete.
  2. The file storage method as claimed in claim 1, characterized in that the split block configuration in step 1 specifically comprises: configuring the split block size, presetting the block splitting method, and presetting the block identification method.
  3. The file storage method as claimed in claim 2, characterized in that obtaining the request header data in step 2 further comprises a preceding step of building and querying a cache index, the cache index being used to judge whether the requested file data is stored locally.
  4. The file storage method as claimed in claim 1, characterized in that step 2 specifically comprises:
    Step 21: judge whether the request header data carries an HTTP range request header; if so, proceed to step 22; if not, proceed to step 23;
    Step 22: use the whole of the requested file information to be returned, as determined by the request header data, as the split block information for calculation;
    Step 23: determine, according to the request header data, the range of the requested file information to be returned, and use the start and end offsets of that range as the split block information for calculation.
  5. The file storage method as claimed in claim 4, characterized in that step 3 specifically comprises:
    Step 31: obtain the to-be-split range of the requested file data, the to-be-split range comprising a first condition and a second condition;
    Step 32: receive the requested file data in real time, using the to-be-split block size as the unit, and judge whether the start offset of the received requested file data is less than the second condition; if so, proceed to the next step; if not, treat the received requested file data of the to-be-split block size as stray data and continue receiving requested file data, block by block, until the judgement is yes;
    Step 33: judge whether the start offset of the requested file data received in real time in step 32 is less than or equal to the first condition; if so, proceed to step 34; if not, proceed to step 37;
    Step 34: judge whether the end offset of the received requested file data is less than the second condition; if so, proceed to step 35; if not, proceed to step 36;
    Step 35: take the requested file data of the to-be-split block size as a split block, and repeat step 32;
    Step 36: take the requested file data of the to-be-split block size as a split block;
    Step 37: calculate the offset of the start of the requested file data of the to-be-split block size, and judge whether the end offset of the offset-adjusted requested file data of the to-be-split block size is less than the second condition; if so, proceed to step 38; if not, proceed to step 39;
    Step 38: calculate the offset of the end of the requested file data of the to-be-split block size, and take the data from the adjusted start offset to the adjusted end offset as a split block;
    Step 39: discard the requested file data of the to-be-split block size, continue receiving requested file data using the to-be-split block size as the unit, and repeat step 32.
  6. The file storage method as claimed in claim 5, characterized in that
    the first condition is (N-1)*blocksize, where N is the split block number and blocksize is the size of each split block; and
    the second condition is N*(blocksize-1), where N is the split block number and blocksize is the size of each split block.
  7. A file storage device, characterized in that the file storage device comprises:
    a configuration unit for configuring the split block data;
    a request header acquisition unit for obtaining the request header data, determining from it the requested file information and the split block information to be returned, and judging from the obtained request header data whether the requested file data is stored locally;
    a splitting unit for fetching the requested file from the origin server according to the request header data and splitting it, so that the requested file data is divided, in whole or in part, into one or more split blocks that are stored in the content delivery network cache system; and
    a judging unit for judging whether the data response is complete.
  8. The file storage device as claimed in claim 7, characterized in that the configuration unit comprises:
    a split block size configuration unit for configuring the split block size;
    a splitting method presetting unit for presetting the block splitting method; and
    an identification presetting unit for presetting the block identification method.
  9. The file storage device as claimed in claim 8, characterized in that the request header acquisition unit comprises:
    a request header judging unit for judging whether the requested file data asked for in the request header data is stored locally;
    a requested file information acquisition unit for obtaining the requested file information to be fetched that is carried in the request header; and
    a split block information acquisition unit for obtaining the split block information to be fetched that is carried in the request header.
  10. The file storage device as claimed in claim 9, characterized in that the request header acquisition unit further comprises:
    a cache index unit for generating a cache index, the cache index being used to judge whether the requested file data is stored locally.
CN201710599321.1A 2017-07-21 2017-07-21 File storage method and file storage device Pending CN107483546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710599321.1A CN107483546A (en) File storage method and file storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710599321.1A CN107483546A (en) File storage method and file storage device

Publications (1)

Publication Number Publication Date
CN107483546A true CN107483546A (en) 2017-12-15

Family

ID=60595343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710599321.1A Pending CN107483546A (en) 2017-07-21 2017-07-21 A kind of file memory method and file storage device

Country Status (1)

Country Link
CN (1) CN107483546A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100094974A1 (en) * 2008-10-15 2010-04-15 Patentvc Ltd. Load-balancing an asymmetrical distributed erasure-coded system
CN102883187A (en) * 2012-09-17 2013-01-16 华为技术有限公司 Time-shift program service method, equipment and system
CN103813185A (en) * 2014-01-26 2014-05-21 中兴通讯股份有限公司 Method, server and client for quickly distributing segmented programs
CN104506493A (en) * 2014-12-04 2015-04-08 武汉市烽视威科技有限公司 HLS content source returning and caching realization method
CN105791366A (en) * 2014-12-26 2016-07-20 中国电信股份有限公司 Large file HTTP-Range downloading method, cache server and system
CN105812833A (en) * 2016-04-07 2016-07-27 网宿科技股份有限公司 File processing method and device
CN105978936A (en) * 2016-04-25 2016-09-28 乐视控股(北京)有限公司 CDN server and data caching method thereof

Similar Documents

Publication Publication Date Title
CN101431539B (en) Domain name resolution method, system and apparatus
CN106888270B (en) Method and system for back source routing scheduling
CN103812849B (en) A kind of local cache update method, system, client and server
CN104519130B (en) A kind of data sharing caching method across IDC
CN103024085B (en) A kind of system and method processing the request of P2P node
CN105162900B (en) A kind of domain name mapping of multi-node collaboration and caching method and system
US20130198341A1 (en) System and method for delivering segmented content
WO2002101988A3 (en) Method and system for efficient distribution of network event data
CN104284201A (en) Video content processing method and device
WO2003083597A3 (en) Collapsed distributed cooperative memory for interactive and scalable media-on-demand systems
CN105516284B (en) A kind of method and apparatus of Cluster Database distributed storage
CN103227826A (en) Method and device for transferring file
CN104967651A (en) Data push, storage and downloading methods and devices based on CDN architecture
CN106453460A (en) File distributing method, apparatus and system
CN107888870A (en) A kind of monitoring method, apparatus and system
CN102497387A (en) Flash video distribution method based on P2P client terminal state analysis
CN105978936A (en) CDN server and data caching method thereof
CN107483546A (en) A kind of file memory method and file storage device
CN103457976B (en) Data download method and system
WO2021074932A3 (en) System and method for real-time delivery of a target content in a streaming content
CN104883381B (en) The data access method and system of distributed storage
CN114978992B (en) Communication method, node and network of safety naming data network
CN110109871A (en) A kind of cross-site high-energy physics data access method and system
CN103312816B (en) A kind of message distributing method and equipment
CN106060100A (en) Distributed cloud storage server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20171215)