CN111526208A - High-concurrency cloud platform file transmission optimization method based on micro-service - Google Patents
- Publication number
- CN111526208A (application CN202010372189.2A)
- Authority
- CN
- China
- Prior art keywords
- service
- micro
- uploading
- file
- cloud platform
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3034—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a storage system, e.g. DASD based or network based
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/16—File or folder operations, e.g. details of user interfaces specifically adapted to file systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/51—Discovery or management thereof, e.g. service location protocol [SLP] or web services
Abstract
The invention relates to a micro-service-based file transmission optimization method for high-concurrency cloud platforms, belonging to the technical field of computers. The method comprises the following steps. S1: build a cloud platform file transmission system on a micro-service architecture that uses a micro-service gateway and a service registry. S2: a file upload request is submitted to the micro-service gateway, and the service registry distributes the load using a dynamic-routing adaptive algorithm based on server load rate to invoke the service. S3: in this system, the upload micro-service is divided into a common upload micro-service that stores upload temporary files on disk and an optimized upload micro-service that stores them in TMPFS. S4: the memory usage of the system is monitored in real time, and when a new upload request reaches the gateway, the specific micro-service to which it is forwarded is determined by the memory margin. The method effectively reduces response time and improves throughput.
Description
Technical Field
The invention belongs to the technical field of computers, and relates to a high-concurrency cloud platform file transmission optimization method based on micro-services.
Background
Compared with traditional internet infrastructure, cloud platforms offer many advantages, including lower cost, real-time updates, high reliability, and high scalability. At present, most cloud platforms are developed as a monolith with a three-tier architecture, namely a front-end interface, a business logic layer, and a back-end database layer, with a single project containing all business functions. A monolith is simple and cheap to develop and deploy, but as the user base grows and the volume of business keeps expanding, the traditional monolithic architecture struggles to meet continuously changing business requirements under high-concurrency scenarios; the micro-service architecture is therefore a good way to solve these problems.
The micro-service architecture functionally divides a monolith into a series of micro-services, each of which can be deployed and released to production independently without affecting the availability of the whole system. The micro-service architecture is thus easy to replace and extend, and different micro-services may use different programming languages and development tools. In a cloud platform, unstructured data such as documents, pictures, and videos account for a high proportion of storage, and the basic requirements for cloud platform file storage include low cost, large capacity, high reliability, and high availability. Users access cloud platform files over the network, but under high concurrency remote access can involve unpredictable delay, so guaranteeing the access performance of the cloud platform in high-concurrency scenarios is important; in particular, upload and download speed determines the user experience.
Disclosure of Invention
In view of this, the present invention provides a micro-service-based file transmission optimization method for high-concurrency cloud platforms. Based on the state of the system memory, idle memory is used to accelerate upload processing when the server has memory to spare, without affecting normal system operation, thereby ensuring the usability and reliability of the system while speeding up file uploads.
In order to achieve the purpose, the invention provides the following technical scheme:
a high concurrency cloud platform file transmission optimization method based on micro-service comprises the following steps:
s1: a cloud platform file transmission system based on a micro-service architecture is built, and a micro-service gateway and a service registration center are used in the micro-service architecture;
s2: submitting a file upload request to the micro-service gateway through a client, the service registry distributing the load using a dynamic-routing adaptive algorithm based on server load rate, and the micro-service gateway invoking the service after pulling the service list from the service registry;
s3: in a cloud platform file transmission system based on a micro-service architecture, the uploading micro-service is divided into a common uploading micro-service for storing and uploading temporary files by using a disk and an optimized uploading micro-service for storing and uploading temporary files by using TMPFS;
s4: during operation, monitoring the memory usage of the system in real time, and when a new upload request reaches the gateway, determining the specific micro-service to which the upload request is forwarded according to the memory margin.
Optionally, in step S2, the dynamic routing adaptive algorithm specifically includes the following steps:
s21: acquiring the CPU utilization rate, the memory utilization rate, the disk utilization rate and the current server connection number as parameters for evaluating the server load;
s22: acquiring the values of the parameters again after two seconds;
s23: calculating, from the change of each parameter over the two-second interval, the weight assigned to that parameter when computing the server load;
s24: according to the importance of the parameters, different weight values are given to different server load parameters to generate a load value of each server;
s25: and calculating the score of each server according to the load value, sequencing the servers according to the scores, and selecting the optimal server for file transmission.
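The embodiment later names Spring Cloud (Java) as the implementation stack; as a language-neutral illustration, the weighted-load scoring of steps S21-S25 can be sketched as follows. The parameter names, weight values, and the inverse-proportional score are illustrative assumptions, not values from the patent:

```python
# Sketch of load-scoring steps S21-S25; weights and scoring are assumptions.

def server_load(params, weights):
    """Weighted sum of the load parameters (CPU, memory, disk, connections)."""
    return sum(weights[k] * params[k] for k in params)

def pick_server(servers, weights):
    """Score each server as the inverse of its load and pick the highest score."""
    scored = {name: 1.0 / server_load(p, weights) for name, p in servers.items()}
    return max(scored, key=scored.get)

servers = {
    "node-a": {"cpu": 0.80, "mem": 0.70, "disk": 0.50, "conn": 0.60},
    "node-b": {"cpu": 0.30, "mem": 0.40, "disk": 0.20, "conn": 0.10},
}
weights = {"cpu": 0.4, "mem": 0.3, "disk": 0.2, "conn": 0.1}
print(pick_server(servers, weights))  # → node-b (the lighter-loaded server)
```

Scoring by the inverse of the load makes a lightly loaded server proportionally more likely to be chosen when the scores are later turned into selection intervals.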
Optionally, in step S3, the optimized upload micro-service that stores upload temporary files in TMPFS comprises the following steps:
s31: using a temporary file system TMPFS based on a memory to store the uploaded temporary file, setting a TMPFS threshold value and recording the use condition;
s32: at the requesting end that initiates the upload, cutting the file into a number of file fragments using a dynamic file-fragmentation algorithm based on the data-block usage pattern, then calling the fragment-upload interface for each file fragment, and fusing the fragments back into a complete file after the upload finishes;
s33: during operation, monitoring the TMPFS memory usage of the system in real time, and when a new upload request reaches the gateway, determining the specific service to which it is forwarded according to the memory margin.
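A minimal sketch of the routing decision in S31/S33: forward to the TMPFS-backed service only while the memory margin stays above the configured threshold. The service names and the simple subtraction test are assumptions; the patent does not specify the exact margin check:

```python
# Hypothetical routing rule for S31/S33; names and margin test are assumed.

def route_upload(tmpfs_free_bytes, upload_size, tmpfs_threshold):
    """Forward to the TMPFS-backed service only while enough margin remains."""
    if tmpfs_free_bytes - upload_size >= tmpfs_threshold:
        return "optimized-upload"   # temporary file held in TMPFS (memory)
    return "common-upload"          # temporary file held on disk

print(route_upload(tmpfs_free_bytes=500 << 20, upload_size=100 << 20,
                   tmpfs_threshold=300 << 20))  # → optimized-upload
```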
Optionally, in step S32, the file dynamic slicing algorithm based on the data block usage pattern specifically includes the following steps:
s321: dividing the use condition of the file data block into a left data block use amount and a right data block use amount, wherein the use condition is represented by an integer value, the integer value indicates how many bytes are actually used by a client in the part, and a threshold value is specified to limit the size of the data block;
s322: updating corresponding usage information by adding the current data block usage to the previous data block usage, the usage being limited by a data block threshold;
s323: and calculating a numerical value by using a splitting and merging formula, and comparing the numerical value with a splitting and merging threshold value to determine whether the file blocks are better merged or better split, wherein the threshold value is a real number and ranges from 0 to 1.
Optionally, the dynamic routing adaptive algorithm calculates weights of the nodes and sorts the weights in ascending order, and then calculates a weight interval R of each service node;
then, the client generates a random number R for each request, and selects a service node according to the position of R in the weight interval R;
in order to avoid that all clients access a new node at the same time after a new service node is added, when each client updates the load information of the service node cached locally, the interval time delta t of the next access is randomly generated.
The invention has the beneficial effects that:
in a micro-service environment, the method uses TMPFS to store file fragments, and designs and realizes a dual-guarantee dynamic routing self-adaptive uploading processing strategy by monitoring the operation load of a server and setting the memory of the uploading service, thereby effectively reducing the response time and improving the throughput rate compared with a common file transmission scheme.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is an overall flowchart of a micro-service based high concurrency cloud platform file transfer optimization method;
FIG. 2 is a flow chart of a dynamic routing adaptation strategy;
FIG. 3 is a flow chart of the fragmented-upload micro-service processing using TMPFS;
FIG. 4 is a schematic diagram of the dynamic file-fragmentation method.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments, some parts of the drawings may be omitted, enlarged, or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
Fig. 1 is an overall flowchart of a method for optimizing file transmission of a high-concurrency cloud platform based on micro-services according to the present invention, including:
step S101: a cloud platform file transmission system based on the Spring Cloud micro-service architecture is built, using the micro-service gateway Zuul and the service registry Eureka;
step S102: submitting a file uploading request to a micro service gateway through a client, distributing a load by using a dynamic routing self-adaption strategy of comprehensive load rate and response time by a service registration center, and calling a service after the micro service gateway pulls the service to the service registration center;
step S103: in the cloud platform file transmission system based on the micro-service architecture, the upload micro-service is divided into a common upload micro-service that stores the upload temporary file on disk and an optimized upload micro-service that stores it in TMPFS (a memory-based temporary file system). When the system memory margin is sufficient, the request is forwarded to the optimized upload micro-service that stores the temporary file in TMPFS; when the memory margin is insufficient, the request is forwarded to the common upload micro-service that stores the temporary file on disk;
step S104: during operation, the TMPFS memory usage of the system is monitored in real time, and when a new upload request is submitted to the micro-service gateway by the client, the specific micro-service to which the request is forwarded is determined according to the memory margin.
The dynamic routing adaptive policy in step S102 is shown in fig. 2, and specifically includes:
step S201: acquiring the CPU utilization L(C1), memory utilization L(M1), disk utilization L(D1), and current server connection count L(S1) as parameters for evaluating server load. The CPU utilization reflects how busy a server is, and the monitoring process samples it periodically to determine its load contribution; the memory utilization changes as the system runs, and the monitoring process periodically samples the physical memory utilization; the disk utilization also affects server performance, and the monitoring process periodically checks the number of disk reads and writes within a time interval to determine how busy the disk is; the current number of server connections represents the server's load pressure;
step S202: acquiring the values of the parameters again after two seconds, and calculating the absolute differences ΔL(X) = |L(X1) − L(X2)|, X ∈ {C, M, D, S};
step S203: a weight threshold α (0 < α < 1) is set; the change of each parameter over the two-second interval is taken as that parameter's weight when calculating the server load, and servers whose weight exceeds the threshold or that fail to respond are removed;
step S204: according to the importance of the parameters, different weight values are assigned to the server load parameters, generating a load value for each server: Load = W_C·L(C2) + W_D·L(D2) + W_M·L(M2) + W_S·L(S2);
step S205: the score of each server is calculated as the inverse of its load value, the servers are sorted by score, and the score interval R of each service node is computed; the client then generates a random number r for each request and selects a service node according to the interval into which r falls;
step S206: in order to avoid that all clients access a new node at the same time after a new service node is added, when each client updates the load information of the service node cached locally, the interval time delta t of the next access is randomly generated.
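Steps S205-S206 can be sketched as follows; the cumulative-interval construction and the uniform jitter for the refresh delay Δt are plausible readings of the description, not the patented implementation:

```python
import random

# Sketch of S205-S206; interval construction and jitter values are assumed.

def build_intervals(scores):
    """Map each node to a cumulative-score interval in [0, 1)."""
    total = sum(scores.values())
    intervals, acc = [], 0.0
    for node, score in scores.items():
        acc += score / total
        intervals.append((node, acc))
    return intervals

def select_node(intervals, r):
    """Pick the node whose interval contains the random number r."""
    for node, upper in intervals:
        if r < upper:
            return node
    return intervals[-1][0]

def next_refresh_delay(base=2.0, jitter=1.0):
    """Randomized interval Δt so clients do not refresh their cached node
    load information in lockstep and stampede a newly added node."""
    return base + random.uniform(0, jitter)

intervals = build_intervals({"node-a": 1.0, "node-b": 3.0})
print(select_node(intervals, random.random()))  # node-b chosen 3x as often
```

Because a node's interval width is proportional to its score, higher-scoring (lighter-loaded) servers receive proportionally more requests without any coordination between clients.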
The optimized upload microservice using the TMPFS to store and upload the temporary file in step S103 is shown in fig. 3, and specifically includes:
step S301: the method comprises the steps that a temporary file system TMPFS based on a memory is used for storing an uploaded temporary file, a TMPFS threshold value is set, and the total size of the memory which can be used by the temporary file is monitored;
step S302: at the requesting end that initiates the upload, the file is cut into a number of file fragments using the dynamic file-fragmentation method based on the data-block usage pattern;
step S303: after receiving an uploaded file from network I/O, the server side judges whether it is a fragment-upload request or a common file-upload request; a common upload directly creates a database record and then writes the file to the specified storage location;
step S304: for a fragment upload, judging whether the fragment is the first one; a new database record is created for the first file fragment, otherwise the count of uploaded fragments in the database is updated, and the file fragment is written to the fragment storage location;
step S305: judging whether the fragment is the last one; if so, the file fragments are fused into a complete file at the specified location and all fragments are deleted; finally, the upload result is returned to the client;
step S306: during operation, the TMPFS memory usage of the system is monitored in real time, and when a new upload request reaches the gateway, the specific service to which it is forwarded is determined according to the memory margin.
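The server-side fragment path of steps S303-S305 can be sketched with in-memory stand-ins for the database and the storage location; a real service would write fragments to TMPFS or disk, and the dict-based `db` and `store` here are illustrative assumptions:

```python
# Sketch of S304-S305 with in-memory stand-ins for database and storage.

def handle_fragment(db, store, file_id, index, total, data):
    """Record a fragment; fuse all fragments into the file on the last one."""
    rec = db.setdefault(file_id, {"received": 0, "total": total})  # first fragment creates the record
    store[(file_id, index)] = data          # write the fragment to its slot
    rec["received"] += 1
    if rec["received"] == total:            # last fragment: fuse and clean up
        blob = b"".join(store.pop((file_id, i)) for i in range(total))
        store[file_id] = blob               # complete file at its final location
        del db[file_id]                     # fragment bookkeeping no longer needed
        return "complete"
    return "partial"

db, store = {}, {}
handle_fragment(db, store, "report.pdf", 0, 2, b"first half, ")
print(handle_fragment(db, store, "report.pdf", 1, 2, b"second half"))  # → complete
```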
The method for dynamically partitioning a file based on the data block usage mode in step S302 is shown in fig. 4, and specifically includes:
step S401: dividing the usage of the file data block into a left data block usage and a right data block usage, the usage being represented by an integer value indicating how many bytes were actually used by the client in the portion;
step S402: specifying thresholds to limit the block size: the minimum block size BS_min is 0.5 MB and the maximum block size BS_max is 5 MB;
step S403: updating the corresponding usage information by adding the current byte usage to the previous byte usage, the usage being capped at half the maximum block size BS_max;
step S404: a splitting value is calculated from the block sizes BS_L and BS_R, the left-data-block usage amounts UL_L and UL_R, and the right-data-block usage amounts UR_L and UR_R, and a merging value is calculated from the left and right usage U_L and U_R of a data block; each value is compared with the merge threshold TS_merge and the split threshold TS_split to determine whether the file blocks are better merged or better split, where TS_merge = 0.8 and TS_split = 0.2, both thresholds being real numbers ranging from 0 to 1.
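The split and merge formulas themselves are not reproduced in the source text, so the sketch below substitutes a simple usage-ratio test against the stated thresholds TS_split = 0.2 and TS_merge = 0.8 and the block-size bounds of S402, purely to illustrate the decision structure:

```python
# The real formulas are omitted in the source; this usage-ratio test is a
# stand-in that only demonstrates the threshold-based split/merge decision.

BS_MIN = 512 * 1024            # minimum block size, 0.5 MB
BS_MAX = 5 * 1024 * 1024       # maximum block size, 5 MB
TS_SPLIT, TS_MERGE = 0.2, 0.8  # thresholds given in the description

def decide(block_size, used_bytes):
    """Return whether a block is better split, merged, or left alone."""
    ratio = used_bytes / block_size
    if ratio <= TS_SPLIT and block_size // 2 >= BS_MIN:
        return "split"   # sparsely used and still divisible: halve the block
    if ratio >= TS_MERGE and block_size * 2 <= BS_MAX:
        return "merge"   # densely used and small enough: merge with a neighbor
    return "keep"

print(decide(1024 * 1024, 100 * 1024))  # → split (under 10% of a 1 MB block used)
```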
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (5)
1. A micro-service-based high-concurrency cloud platform file transmission optimization method, characterized by comprising the following steps:
s1: a cloud platform file transmission system based on a micro-service architecture is built, and a micro-service gateway and a service registration center are used in the micro-service architecture;
s2: submitting a file uploading request to a micro service gateway through a client, distributing a load by a service registration center by using a dynamic routing self-adaptive algorithm based on a server load rate, and calling a service after the micro service gateway pulls the service to the service registration center;
s3: in a cloud platform file transmission system based on a micro-service architecture, the uploading micro-service is divided into a common uploading micro-service for storing and uploading temporary files by using a disk and an optimized uploading micro-service for storing and uploading temporary files by using a temporary file system TMPFS;
s4: and in the operation process, the memory use condition of the system is monitored in real time, and the specific micro service forwarded by the uploading request is determined according to the memory allowance condition when the new uploading request reaches the gateway.
2. The method for optimizing file transfer of the high-concurrency cloud platform based on the microservice according to claim 1, wherein: in step S2, the dynamic routing adaptive algorithm specifically includes the following steps:
s21: acquiring the CPU utilization rate, the memory utilization rate, the disk utilization rate and the current server connection number as parameters for evaluating the server load;
s22: acquiring the values of the parameters again after two seconds;
s23: calculating the weight of each parameter within two seconds as the weight of each parameter when calculating the server load;
s24: according to the importance of the parameters, different weight values are given to different server load parameters to generate a load value of each server;
s25: and calculating the score of each server according to the load value, sequencing the servers according to the scores, and selecting the optimal server for file transmission.
3. The method for optimizing file transfer of the high-concurrency cloud platform based on the microservice according to claim 1, wherein: in step S3, the storing, by the TMPFS, the optimized upload microservice for uploading the temporary file specifically includes the following steps:
s31: using a temporary file system TMPFS based on a memory to store the uploaded temporary file, setting a TMPFS threshold value and recording the use condition;
s32: the method comprises the steps that a file is cut into a plurality of file fragments at a request end initiated by uploading by using a file dynamic fragmentation algorithm based on a data block use mode, then a fragment uploading interface is called for each file fragment, and the file fragments are fused and restored into a complete file after uploading is completed;
s33: and in the operation process, the service condition of the TMPFS memory of the system is monitored in real time, and the specific service forwarded by the upload request is determined according to the memory allowance condition when a new upload request reaches the gateway.
4. The method for optimizing file transfer of the high-concurrency cloud platform based on the micro-service according to claim 3, wherein: in step S32, the file dynamic slicing algorithm based on the data block usage pattern specifically includes the following steps:
s321: dividing the use condition of the file data block into a left data block use amount and a right data block use amount, wherein the use condition is represented by an integer value, the integer value indicates how many bytes are actually used by a client in the part, and a threshold value is specified to limit the size of the data block;
s322: updating corresponding usage information by adding the current data block usage to the previous data block usage, the usage being limited by a data block threshold;
s323: and calculating a numerical value by using a splitting and merging formula, and comparing the numerical value with a splitting and merging threshold value to determine whether the file blocks are better merged or better split, wherein the threshold value is a real number and ranges from 0 to 1.
5. The method for optimizing file transfer of the high-concurrency cloud platform based on the microservice according to claim 2, wherein: the dynamic routing self-adaptive algorithm calculates the weight of each node and sorts the weight according to ascending order, and then calculates the weight interval R of each service node;
then, the client generates a random number R for each request, and selects a service node according to the position of R in the weight interval R;
in order to avoid that all clients access a new node at the same time after a new service node is added, when each client updates the load information of the service node cached locally, the interval time delta t of the next access is randomly generated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010372189.2A CN111526208A (en) | 2020-05-06 | 2020-05-06 | High-concurrency cloud platform file transmission optimization method based on micro-service |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111526208A true CN111526208A (en) | 2020-08-11 |
Family
ID=71907051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010372189.2A Pending CN111526208A (en) | 2020-05-06 | 2020-05-06 | High-concurrency cloud platform file transmission optimization method based on micro-service |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111526208A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112311902A (en) * | 2020-12-23 | 2021-02-02 | 深圳市蓝凌软件股份有限公司 | File sending method and device based on micro-service |
CN112466283A (en) * | 2020-10-30 | 2021-03-09 | 北京仿真中心 | Collaborative software voice recognition system |
CN112685261A (en) * | 2021-01-05 | 2021-04-20 | 武汉长江通信智联技术有限公司 | Micro-service operation state monitoring method based on observer mode |
CN115208874A (en) * | 2022-07-15 | 2022-10-18 | 北银金融科技有限责任公司 | Multi-communication-protocol distributed file processing platform based on bank core |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110149395A (en) * | 2019-05-20 | 2019-08-20 | 华南理工大学 | One kind is based on dynamic load balancing method in the case of mass small documents high concurrent |
- 2020-05-06: application CN202010372189.2A filed in CN; published as CN111526208A; status Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110149395A (en) * | 2019-05-20 | 2019-08-20 | 华南理工大学 | One kind is based on dynamic load balancing method in the case of mass small documents high concurrent |
Non-Patent Citations (1)
Title |
---|
Guo Chao: "Research on access control and upload optimization of a micro-service-based enterprise storage system", China Master's Theses Full-text Database * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112466283A (en) * | 2020-10-30 | 2021-03-09 | 北京仿真中心 | Collaborative software voice recognition system |
CN112466283B (en) * | 2020-10-30 | 2023-12-01 | 北京仿真中心 | Cooperative software voice recognition system |
CN112311902A (en) * | 2020-12-23 | 2021-02-02 | 深圳市蓝凌软件股份有限公司 | File sending method and device based on micro-service |
CN112311902B (en) * | 2020-12-23 | 2021-05-07 | 深圳市蓝凌软件股份有限公司 | File sending method and device based on micro-service |
CN112685261A (en) * | 2021-01-05 | 2021-04-20 | 武汉长江通信智联技术有限公司 | Micro-service operation state monitoring method based on observer mode |
CN115208874A (en) * | 2022-07-15 | 2022-10-18 | 北银金融科技有限责任公司 | Multi-communication-protocol distributed file processing platform based on bank core |
CN115208874B (en) * | 2022-07-15 | 2024-03-29 | 北银金融科技有限责任公司 | Multi-communication protocol distributed file processing platform based on bank core |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111526208A (en) | High-concurrency cloud platform file transmission optimization method based on micro-service | |
CN101133622B (en) | Splitting a workload of a node | |
CN101562543B (en) | Cache data processing method and processing system and device thereof | |
KR101752928B1 (en) | Swarm-based synchronization over a network of object stores | |
CN111680050A (en) | Fragmentation processing method, device and storage medium for alliance link data | |
WO2018201103A1 (en) | Iterative object scanning for information lifecycle management | |
CN107817947B (en) | Data storage method, device and system | |
WO2007088081A1 (en) | Efficient data management in a cluster file system | |
CN103227818A (en) | Terminal, server, file transferring method, file storage management system and file storage management method | |
CN111258980B (en) | Dynamic file placement method based on combined prediction in cloud storage system | |
JP2014044677A (en) | Transmission control program, communication node, and transmission control method | |
CN106326239A (en) | Distributed file system and file meta-information management method thereof | |
CN113742135B (en) | Data backup method, device and computer readable storage medium | |
CN115189908B (en) | Random attack survivability evaluation method based on network digital twin | |
CN106973091B (en) | Distributed memory data redistribution method and system, and master control server | |
CN105095495A (en) | Distributed file system cache management method and system | |
CN102480502A (en) | I/O load equilibrium method and I/O server | |
CN101800768A (en) | Gridding data transcription generation method based on storage alliance subset partition | |
US20140310321A1 (en) | Information processing apparatus, data management method, and program | |
CN109819013A (en) | A kind of block chain memory capacity optimization method based on cloud storage | |
CN102546230A (en) | Overlay-network topological optimization method in P2P (Peer-To-Peer) streaming media system | |
CN104767822A (en) | Data storage method based on version | |
CN113468200B (en) | Method and device for expanding fragments in block chain system | |
CN108459926B (en) | Data remote backup method and device and computer readable medium | |
CN112929432B (en) | Broadcasting method, equipment and storage medium based on repeated broadcasting history |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20200811 |