CN110795157A - Method for increasing starting speed of diskless workstation by using limited cache - Google Patents

Method for increasing starting speed of diskless workstation by using limited cache

Info

Publication number
CN110795157A
CN110795157A (application CN201911024635.4A)
Authority
CN
China
Prior art keywords
data
diskless
read
cache
workstation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911024635.4A
Other languages
Chinese (zh)
Other versions
CN110795157B (en)
Inventor
李广斌
郝岩
杨程雲
林芳菲
郭月丰
彭寿林
吴建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HANGZHOU SHUNWANG TECHNOLOGY CO LTD
Original Assignee
HANGZHOU SHUNWANG TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HANGZHOU SHUNWANG TECHNOLOGY CO LTD
Priority to CN201911024635.4A
Publication of CN110795157A
Application granted
Publication of CN110795157B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5011 Allocation of resources, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5022 Mechanisms to release resources
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a method for improving the starting speed of a diskless workstation by using a limited cache. The method downloads predictable data from the server ahead of time and stores it in a local cache, earlier than the moment it is requested, eliminating the wait after a request and effectively shortening the service time of the client/server model. Meanwhile, by the locality principle, the boot data of diskless workstations in the same application scenario is highly similar, so the boot data required by other diskless workstations at startup can be predicted from the known boot data of one workstation.

Description

Method for increasing starting speed of diskless workstation by using limited cache
Technical Field
The invention belongs to the technical field of diskless computers and data prefetching, and particularly relates to a method for improving the boot speed of a diskless workstation by using a limited cache.
Background
In recent years, internet traffic has grown without a matching increase in internet capacity. The net effect of this growth is a significant increase in user-perceived delay, i.e., the time between when a client issues a data request and when the response arrives. Potential sources of delay include heavy load on the Web server, network congestion, low bandwidth, bandwidth under-utilization, and propagation delay.
Meanwhile, as processors have advanced, the gap between the response time of data over the network and the speed of processors keeps widening; data needed at runtime must be requested from the cloud server, and waiting for network transmission seriously affects the overall running speed of applications and the user experience. The conventional ways to shorten network transmission time are to increase bandwidth and to place servers closer to clients, but both rest on infrastructure and are difficult to improve further once the best attainable conditions have been reached.
Another way to reduce latency is to cache Web files at various points in the network (client/proxy/server). Caching exploits temporal locality, and efficient client-side caching reduces client-perceived latency, server load, and the number of packets in flight, thereby increasing available bandwidth; this is where data prefetching techniques come in.
Data prefetching refers to inferring clients' future requests for Web objects and, exploiting the clients' idle time, loading those objects into the cache in the background before the clients explicitly request them. The main advantage of prefetching is that it prevents bandwidth under-utilization and hides part of the delay. However, without a well-designed prefetching scheme, the client may never use many of the transmitted documents, wasting bandwidth; an effective prefetching scheme combined with a transmission-rate control mechanism can shape network traffic, significantly reducing its burstiness and thus improving network performance.
A distributed cache model based on file prediction (DLSDCM) performs file prediction on top of a client-side file prediction model and schedules all user requests in the distributed network from the server's perspective, improving the client's throughput and data access without affecting other clients' data access. The DLSDCM implementation is divided into two parts, client and server. Each client maintains a local copy of the DLS file-prediction data; on each read request it simultaneously pre-reads the two files predicted to follow the requested target file, the prediction being based on the files mainly requested in recent rounds and their hit counts. The server maintains two queues, a read-request queue and a pre-read-request queue, which handle the scheduling of the clients' read requests and pre-read requests respectively.
The DLSDCM client pre-reads the file data predicted by the DLSDCM model and stores oversized data on the local disk: the client allocates a memory region of configurable size for a read-request cache and a pre-read-request cache, and creates a cache directory on disk; when a pre-read file is larger than the memory cache, it is written into the disk cache directory. The read-request and pre-read-request cache regions are retained for a period after a read request completes and are reclaimed if no new read request arrives for a long time; data in the disk cache directory is kept long-term, and when the directory approaches a specified size, the earliest-stored data is reclaimed first (first stored, first recovered).
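For concreteness, the prior-art behavior just described can be sketched as follows. This is a minimal illustrative reconstruction in Python; the class name, size parameters, and method names are assumptions, not details of any actual DLSDCM implementation.

```python
import os
from collections import OrderedDict

class TwoTierPrereadCache:
    """Sketch of the DLSDCM-style client cache: a configurable memory region
    for read/pre-read data, with oversized pre-read files spilled to a disk
    cache directory that is reclaimed first-stored-first-recovered."""

    def __init__(self, mem_limit_bytes, disk_dir, disk_limit_bytes):
        self.mem_limit = mem_limit_bytes      # configurable, per the description
        self.mem_used = 0
        self.mem = {}                         # file name -> bytes held in memory
        self.disk_dir = disk_dir
        self.disk_limit = disk_limit_bytes
        self.disk_queue = OrderedDict()       # insertion order doubles as age
        os.makedirs(disk_dir, exist_ok=True)

    def store_preread(self, name, data):
        if len(data) <= self.mem_limit - self.mem_used:
            self.mem[name] = data
            self.mem_used += len(data)
        else:
            # Pre-read file larger than the free memory cache: spill to disk.
            self._reclaim_disk(len(data))
            with open(os.path.join(self.disk_dir, name), "wb") as f:
                f.write(data)
            self.disk_queue[name] = len(data)

    def _reclaim_disk(self, incoming):
        # "First stored, first recovered": evict oldest files until there is room.
        used = sum(self.disk_queue.values())
        while self.disk_queue and used + incoming > self.disk_limit:
            oldest, size = self.disk_queue.popitem(last=False)
            os.remove(os.path.join(self.disk_dir, oldest))
            used -= size
```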
However, the prior art does not support continuously serving files larger than a small cache: it loads the requested file into the cache in one go, keeps it for a period after it has been read, then empties the cache and refills it from the file queue in the cache directory. A large memory region is therefore occupied as cache, and each time the cached data has been read out, the process must pause: the old file is retained for a while, the cache is emptied, and a new file is read in from the server. Because of this cache-space limitation, the prior art's process of supplying predicted files to the client from the cache is intermittent rather than continuous.
Disclosure of Invention
In view of the above, the present invention provides a method for increasing the boot speed of a diskless workstation by using a limited cache, which deliberately overlaps local execution with network transmission and reduces the boot wait time.
A method for using a limited cache to increase the boot speed of a diskless workstation: the data required to boot the diskless workstation is obtained by prediction; on startup the diskless workstation requests the required data from a server, and the server sends the corresponding data to the diskless workstation's cache in sequence. While waiting for data from the server, the diskless workstation runs on the data already in the cache; each time it has used a portion of the data it clears that portion from the cache and stores newly received data in the vacated space, until all the data has been used and the boot completes.
Furthermore, in a specific implementation, the server numbers the data required for booting according to the order in which the diskless workstation requests it. When the workstation boots, the server sends the first batch of data to the workstation's cache, and the workstation uses it immediately; while the workstation is consuming that batch, the server sends the next batch, always staying several steps of boot data ahead. Each time the workstation finishes with part of the data, the corresponding cache space is released and used to store the newly received steps of boot data.
Furthermore, the data prefetched by the diskless workstation always stays several steps ahead of the data currently in use, and the number of steps is not fixed: the amount of data prefetched is determined by the remaining free space in the cache.
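A minimal sketch of this cache discipline follows, assuming the boot data has already been split into numbered steps and that fetch_from_server is a callable supplied by the surrounding system (an illustrative name, not an API defined by the patent):

```python
from collections import deque

class BootPrefetchCache:
    """Bounded cache that keeps prefetched boot data a variable number of
    steps ahead of the step currently being consumed; the lead simply
    follows the cache's remaining free space."""

    def __init__(self, capacity_bytes, fetch_from_server):
        self.free = capacity_bytes
        self.cache = deque()              # (step_no, data), in step order
        self.fetch = fetch_from_server    # callable: step_no -> bytes
        self.next_fetch = 0

    def prefetch(self, step_sizes):
        # Pull in as many upcoming steps as the free space allows.
        while (self.next_fetch < len(step_sizes)
               and step_sizes[self.next_fetch] <= self.free):
            data = self.fetch(self.next_fetch)
            self.cache.append((self.next_fetch, data))
            self.free -= len(data)
            self.next_fetch += 1

    def consume_next(self, step_sizes):
        # Boot code uses the oldest cached step, releases its space, and the
        # vacated space immediately stores newly received steps.
        step_no, data = self.cache.popleft()
        self.free += len(data)
        self.prefetch(step_sizes)
        return step_no, data
```

Booting then amounts to one initial prefetch followed by consume_next calls until every step has been used.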
Further, the diskless workstation's cache may be a fixed-size region of local memory, or any other separate storage space.
Furthermore, based on the locality principle, i.e., that the boot data of diskless workstations in the same application scenario is highly similar, the boot data that other diskless workstations will need at startup is predicted from the known boot data of one diskless workstation.
Further, the diskless workstation adopts a staged pre-reading strategy and introduces a pre-reading virtual slider in the process of running and using the cache data, specifically: dividing the whole startup data into a plurality of stages in sequence, and pre-reading isolation of each stage; when the current read-write is carried out, if the new read-write request is not near the current read-write range, searching the read-write range matched with the request, namely generating a new read-write virtual slider; when the continuous multiple read-write requests correspond to the new pre-read virtual slider, jumping the pre-read range to the pre-read virtual slider; when a few individual read/write requests correspond to the new read-ahead virtual slider, the read-ahead virtual slider is considered to be noise, and the read-ahead virtual slider is discarded. The strategy increases certain flexibility and jumping performance, the hit rate can be improved when the change of system startup data is large, data interleaving is prevented, and especially when a pre-reading virtual slider is introduced, the phenomenon that the jump is too serious and deviates from pre-reading, so that the hit rate is reduced is prevented.
Furthermore, for managing the pre-read data of multiple diskless workstations, a unique identifier is first derived from the software and hardware configuration, so that pre-read data for different configurations is stored separately. Meanwhile, the server periodically updates the pre-read data according to a policy, summarizing boot data from multiple boots of multiple terminals (for example, keeping the boot data that occurs most often across boots, or across terminals, or selecting the terminal boot data with the highest similarity to the others) to form a complete pre-read data set.
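As a sketch of this server-side management, assume each boot trace is recorded as an ordered list of block numbers; the fingerprint fields and the frequency threshold below are illustrative assumptions:

```python
import hashlib
from collections import Counter

def config_key(hw_id: str, sw_version: str) -> str:
    # Unique identifier per software/hardware combination, so pre-read data
    # for different configurations is stored separately.
    return hashlib.sha256(f"{hw_id}|{sw_version}".encode()).hexdigest()

def merge_boot_traces(traces, min_fraction=0.5):
    """Summarize several terminals' boot traces (each an ordered list of
    block numbers) into one pre-read sequence: keep blocks that occur in at
    least min_fraction of the boots, ordered by earliest request position."""
    seen_in = Counter()
    first_pos = {}
    for trace in traces:
        for pos, block in enumerate(trace):
            if block not in first_pos or pos < first_pos[block]:
                first_pos[block] = pos
        for block in set(trace):
            seen_in[block] += 1
    keep = [b for b, n in seen_in.items() if n >= min_fraction * len(traces)]
    return sorted(keep, key=lambda b: first_pos[b])
```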
The method downloads predictable data from the server ahead of time and stores it in a local cache, earlier than the moment it is requested; this eliminates the wait after a request and effectively shortens the service time of the client/server model. Meanwhile, by the locality principle, the boot data of diskless workstations in the same application scenario is highly similar, so the boot data required by other workstations at startup can be predicted from the known boot data of one workstation.
Beyond the diskless boot scenario, the method is in principle applicable to more services and applications (such as cloud services, web pages, and mobile applications): downloading predictable data into a local cache ahead of time shortens response times. The cache size is configurable, excessive memory is not occupied, and the space can be released once the service or application finishes.
Drawings
FIG. 1 is a schematic diagram of the boot process of a diskless workstation without the prefetch/read-ahead technique.
FIG. 2 is a schematic diagram of the boot process of a diskless workstation using the prefetch/read-ahead technique of the present invention.
FIG. 3 is a schematic view of the cache space of a diskless workstation.
FIG. 4 is a step diagram of the complete boot data.
Detailed Description
In order to describe the present invention more specifically, the technical solution of the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
In our company's network cloud products, many services must run during the boot process, and running them takes considerable time, so booting takes long and the excessive boot time hurts user experience. The cache space is limited and the predicted boot data is far larger than the cache; nevertheless, the data the client is predicted to request is always staged in the cache ahead of time, so it is available before the client runs and need not be transmitted from the server at that moment.
Originally, when a diskless workstation boots, it must request the required boot data from the system server; the server sends the data after receiving each request, and the workstation must wait until the data arrives before continuing. To reduce or even eliminate this wait for boot-data transmission and shorten the boot time, the data that will be requested from the system server is stored in advance in a limited cache on the diskless workstation. During operation the data is read directly from the cache, with no waiting for network transmission, so execution and data transfer proceed in parallel. The cache is a dedicated portion of the workstation's memory, and its size is configurable.
This technique contains two main points:
① after the data needed by the diskless workstation has been predicted, the data packets are sent to the diskless workstation's cache in advance when the workstation boots.
② the cache space is smaller than the boot data, so the complete boot data cannot be sent at once; the server sends a portion at a time, and after the booting workstation has used a portion of the data, the server sends the next several steps of data into the corresponding freed cache space, so the written data always leads the running data by several steps.
Without such a technique, data is sent from the cloud server only after the client requests it, so the client waits for transmission after every request and the boot time grows substantially; conventionally, the remedies for long transmission times are to increase bandwidth and move the server closer. By predicting and transmitting the data packets in advance, the technique of the invention goes further: it makes effective use of idle bandwidth and shortens the time at a lower layer.
Simply storing the boot data in the diskless workstation's cache before boot does not work: first, in a network cloud environment the diskless workstation's memory, including any cache holding boot data, is cleared on every boot; second, the boot data is dynamic and depends on the unified management of the cloud service. With the technical solution of the invention, the diskless workstation fetches all prior prediction information from the server and then downloads the predicted data in advance according to its current progress.
As shown in fig. 1, in the ordinary case without such a technique, after a boot event the client must request data from the server many times, each time waiting for the server's response before continuing, until the boot completes. Many cloud-delivered services run during the boot process, and the client must download a large amount of boot data from the server.
As shown in fig. 2, with the technical solution of the invention, after a boot event the server sends the predicted first portion of boot data to the diskless workstation's cache; the workstation then makes its many data requests against the local cache, which is fast, while the freed cache space receives the next portion of data.
Take an internet cafe as the cloud application scenario: the boot data of every terminal in the same cafe corresponds to that cafe's system and software configuration, and with the cafe's terminals serving as diskless workstations, the boot data of the workstations in the same cafe is highly similar. Once the administrator has configured the system, the server captures the boot data when the first diskless workstation requests it for booting, and can then readily predict the identical boot data the remaining workstations will need; this is prediction by the locality principle.
However, the diskless workstation's cache is limited and cannot hold the complete boot data, so the technical scheme of the invention proceeds as follows:
(1) The boot data is indexed by request step (step).
(2) When the diskless workstation boots, the server sends a first batch of data into the workstation's cache; while the workstation processes that batch, the server sends the data the workstation will request next into the cache, always several steps of boot data in advance.
(3) Each time the diskless workstation has used a portion of the data, the corresponding cache space is released, and the freed space stores the several newly received steps of data.
Finally, throughout the boot process, data arrives in the workstation's cache before it is requested and requested data is served from the cache, saving the time spent waiting for network transmission; execution and data transfer proceed simultaneously.
Because the cache space is limited, data must be written into and released from the cache in order. To realize step (2), the boot data is transmitted, received, read, and written at a granularity of several steps (for example, 200 steps), and the step count is configurable. The server sends a portion of the data each time, and the workstation releases the corresponding space whenever it consumes data from the cache; as the client uses data and frees cache, the server keeps writing future request data to the client, so the write position always leads the running position by a fixed number of steps (settable in advance), and the cache keeps a lead of predicted data in case the client sometimes runs fast. With the step count set to 200, the specific implementation is as follows:
2.1 On its first boot, the diskless client records the disk index read by the operating system (the sector numbers read and the lengths of the reads) and reports this data to the server.
2.2 On its second boot, the diskless client obtains the recorded index data from the server and issues 200 read requests to the server according to that index.
2.3 By the time the operating system issues its first read request, the data is already in memory and need not be fetched from the server.
2.4 Each time a read request is received from the operating system, the driver sends 2 more read requests to the server in disk-index order, keeping the content the driver has read ahead of the operating system's read requests.
2.5 The content the operating system needs to read can thus be obtained directly from memory rather than from the server, greatly reducing the disk-read response time and accelerating the boot.
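Steps 2.2 and 2.4 can be sketched as a driver-side loop. The index format (sector, length) follows the description above; send_read and the class name are illustrative, and the counts (200 initial requests, 2 per OS read) are the ones given in the text:

```python
class PrereadDriver:
    """Second-boot driver sketch: prime the cache with 200 reads from the
    recorded index, then stay ahead by issuing 2 server reads per OS read."""

    def __init__(self, index, send_read, initial_lead=200, per_request=2):
        self.index = index            # [(sector, length), ...] from first boot
        self.send_read = send_read    # issues one read request to the server
        self.next = 0
        self.per_request = per_request
        for _ in range(min(initial_lead, len(index))):
            self._issue_one()         # step 2.2: prime the cache

    def _issue_one(self):
        if self.next < len(self.index):
            sector, length = self.index[self.next]
            self.send_read(sector, length)
            self.next += 1

    def on_os_read(self, sector, length):
        # Step 2.4: each OS read triggers two further server reads in
        # disk-index order, keeping the driver's read-ahead in front.
        for _ in range(self.per_request):
            self._issue_one()
```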
Fig. 3 shows the diskless workstation's cache; the sequence numbers represent the order in which data is cached during operation, showing that the cache is released and written by the server in step order. Because the rhythms of execution and transmission differ and their timing is unstable, the several-step lead reserves slack for the mismatch, so that data transfer does not fall behind when execution is fast.
Fig. 4 depicts the complete boot data and shows that the data sent in advance by the server is "pushed forward" along with the workstation's running position: because the advance step count is set beforehand and does not change, every time the workstation consumes a portion of data and requests the next, the server sends the data at [requested step + several steps], so the data sent in advance always leads the data in use by the fixed number of steps.
In the products our enterprise has launched, the reduction in boot time is the main observed effect. Testing produced the following data: on a Windows 7 system, boot takes 100.5 s without the technique and 41.1 s with it, a 59% improvement; on a Windows 10 system, boot takes 135 s without the technique and 82 s with it, a 41.3% improvement. Applying the technique of the invention thus shortens the time at every download node at once, saving the effort of shortening run time by optimizing other programs and improving code efficiency, which by comparison achieves its effect at high cost.
The embodiments described above are presented to enable a person of ordinary skill in the art to make and use the invention. Those skilled in the art can readily make various modifications to these embodiments and apply the general principles defined herein to other embodiments without inventive effort. Therefore, the present invention is not limited to the embodiments above, and improvements and modifications made by those skilled in the art on the basis of this disclosure shall fall within the protection scope of the present invention.

Claims (8)

1. A method for increasing the boot speed of a diskless workstation by using a limited cache, characterized in that: the data required to boot the diskless workstation is obtained by prediction; on startup the diskless workstation requests the required data from a server, and the server sends the corresponding data to the diskless workstation's cache in sequence; while waiting for data from the server, the diskless workstation runs on the data already in the cache, clears each portion of data from the cache after it has been used, and stores newly received data in the vacated cache space, until all the data has been used and the boot is complete.
2. The method for increasing the boot speed of a diskless workstation using a limited cache of claim 1, characterized in that: in a specific implementation, the server numbers the data required for booting according to the order in which the diskless workstation requests it; when the workstation boots, the server sends a first batch of data to the workstation's cache and the workstation uses it immediately; while the workstation is consuming that batch, the server sends the next batch to the cache, always several steps of boot data in advance, and each time the workstation finishes with part of the data, the corresponding cache space is released and used to store the newly received steps of boot data.
3. The method for increasing the boot speed of a diskless workstation using a limited cache of claim 1, characterized in that: the data prefetched by the diskless workstation always stays several steps ahead of the data currently in use, and the number of steps is not fixed: the amount of data prefetched is determined by the remaining free space in the cache.
4. The method for increasing the boot speed of a diskless workstation using a limited cache of claim 1, characterized in that: the diskless workstation's cache may be a fixed-size region of local memory, or any other separate storage space.
5. The method for increasing the boot speed of a diskless workstation using a limited cache of claim 1, characterized in that: based on the locality principle, i.e., that the boot data of diskless workstations in the same application scenario is highly similar, the boot data that other diskless workstations will need at startup is predicted from the known boot data of one diskless workstation.
6. The method for increasing the boot speed of a diskless workstation using a limited cache of claim 1, characterized in that: while running on cached data, the diskless workstation adopts a staged pre-read strategy and introduces a virtual read-ahead window, specifically: the whole body of boot data is divided into stages in order, and each stage's pre-reading is isolated; during reading and writing, if a new read-write request does not fall near the current read-write range, a read-write range matching the request is sought, i.e., a new virtual read-ahead window is created; when several consecutive read-write requests correspond to the new window, the pre-read range jumps to that window; when only a few isolated read-write requests correspond to the new window, the window is treated as noise and discarded.
7. The method for increasing the boot speed of a diskless workstation using a limited cache of claim 1, characterized in that: for managing the pre-read data of multiple diskless workstations, a unique identifier is first derived from the software and hardware configuration, so that pre-read data for different configurations is stored separately; meanwhile, the server periodically updates the pre-read data according to a policy, summarizing boot data from multiple boots of multiple terminals to form a complete pre-read data set.
8. The method for increasing the boot speed of a diskless workstation using a limited cache of claim 1, characterized in that: the method downloads predictable data from the server ahead of time and stores it in a local cache, earlier than the moment it is requested, saving the wait after a request and effectively shortening the service time of the client/server model; meanwhile, by the locality principle, the boot data of diskless workstations in the same application scenario is highly similar, so the boot data required by other diskless workstations at startup can be predicted from the known boot data of one workstation.
CN201911024635.4A 2019-10-25 2019-10-25 Method for improving starting-up speed of diskless workstation by using limited cache Active CN110795157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911024635.4A CN110795157B (en) 2019-10-25 2019-10-25 Method for improving starting-up speed of diskless workstation by using limited cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911024635.4A CN110795157B (en) 2019-10-25 2019-10-25 Method for improving starting-up speed of diskless workstation by using limited cache

Publications (2)

Publication Number Publication Date
CN110795157A true CN110795157A (en) 2020-02-14
CN110795157B CN110795157B (en) 2023-05-12

Family

ID=69441394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911024635.4A Active CN110795157B (en) 2019-10-25 2019-10-25 Method for improving starting-up speed of diskless workstation by using limited cache

Country Status (1)

Country Link
CN (1) CN110795157B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040153694A1 (en) * 2002-11-26 2004-08-05 Microsoft Corporation Reliability of diskless network-bootable computers using non-volatile memory cache
US20090327453A1 (en) * 2008-06-30 2009-12-31 Yu Neng-Chien Method for improving data reading speed of a diskless computer
CN101814038A (en) * 2010-03-23 2010-08-25 杭州顺网科技股份有限公司 Method for increasing booting speed of computer
CN102323888A (en) * 2011-08-11 2012-01-18 杭州顺网科技股份有限公司 A kind of diskless computer starts accelerated method
CN104408209A (en) * 2014-12-25 2015-03-11 中科创达软件股份有限公司 File processing method, file processing device and electronic equipment in start-up process of mobile operating system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谭怀亮; 彭诗辉; 贺再红: "Method for optimized distribution of diskless network server data based on hybrid storage" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114553857A (en) * 2022-01-25 2022-05-27 西安歌尔泰克电子科技有限公司 Data transmission method and device, wrist-worn equipment and medium

Also Published As

Publication number Publication date
CN110795157B (en) 2023-05-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant