CN117806837B - Method, device, storage medium and system for managing hard disk tasks - Google Patents


Info

Publication number
CN117806837B
CN117806837B (application CN202410226776.9A)
Authority
CN
China
Prior art keywords
task
host
hard disk
prediction model
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410226776.9A
Other languages
Chinese (zh)
Other versions
CN117806837A (en)
Inventor
王陈园
高美洲
付凤之
刘忞斋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Original Assignee
Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd filed Critical Shandong Yunhai Guochuang Cloud Computing Equipment Industry Innovation Center Co Ltd
Priority to CN202410226776.9A priority Critical patent/CN117806837B/en
Publication of CN117806837A publication Critical patent/CN117806837A/en
Application granted granted Critical
Publication of CN117806837B publication Critical patent/CN117806837B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 — Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 — Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a method, a device, a storage medium and a system for managing hard disk tasks, and relates to the technical field of storage. The method comprises the following steps: predicting a host task by using a first prediction model to obtain a predicted end time of the first host task; in response to completion of the first host task, judging whether the difference between the actual end time and the predicted end time of the first host task is smaller than a preset time; executing an internal task of the hard disk in response to the difference being smaller than the preset time; and executing the next host task in response to the difference being greater than or equal to the preset time. By managing whether the hard disk executes a host task or an internal task of the hard disk according to the model's prediction of host tasks, simultaneous access to the flash memory by host tasks and internal tasks is reduced, and the read-write performance of the hard disk is improved.

Description

Method, device, storage medium and system for managing hard disk tasks
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a method, an apparatus, a storage medium, and a system for managing a hard disk task.
Background
Compared with a mechanical hard disk (Hard Disk Drive, HDD), a solid state disk (Solid State Disk, SSD) has higher access speed, higher data transmission rate, better durability and lower energy consumption, and is gradually replacing the mechanical hard disk in fields such as commercial computers and servers, becoming the mainstay of data storage media. Read-write commands sent by the host to the solid state disk are received and executed by the solid state disk controller. In addition to processing host commands, the solid state disk controller is also responsible for internal control and management tasks related to the flash memory, such as garbage collection (Garbage Collection, GC). Executing internal tasks occupies solid state disk resources and underlying flash memory bandwidth, and host tasks contend with internal tasks for flash memory access, which degrades host read-write performance. As data is continuously written to the solid state disk, the execution frequency of internal tasks increases, seriously affecting the host's read-write performance on the solid state disk.
Disclosure of Invention
To solve the prior-art problem that host tasks contend with solid state disk internal tasks for flash memory access, thereby degrading the host's read-write performance on the solid state disk, the invention adopts the following technical scheme: based on model prediction, host tasks and hard disk internal tasks are managed, improving host read/write performance.
In a first aspect, a method for managing a hard disk task is provided, including: predicting a host task by using a first prediction model, and predicting to obtain a predicted ending time of the first host task;
In response to the completion of the execution of the first host task, judging whether the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than a preset time; executing the internal task of the hard disk in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being less than a preset time, wherein the internal task of the hard disk comprises garbage collection; and executing the next host task in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being greater than or equal to the preset time.
Further, before predicting the host task using the first prediction model, further comprising: judging whether the first prediction model is updated; and responding to the first prediction model to complete updating, and then using the updated first prediction model to conduct host task prediction.
Further, in response to the first prediction model not completing the update, performing host task prediction using the current first prediction model;
Continuing to train the first prediction model to obtain corresponding latest first prediction model parameters; and replacing the current first prediction model parameters with the corresponding latest first prediction model parameters to finish updating the first prediction model.
Further, the hard disk task management method further comprises the following steps: updating a host task data sample set every first preset time; training a first prediction model by adopting the updated host task data sample set to obtain the latest first prediction model parameters; the first predictive model is updated with the latest first predictive model parameters.
Further, predicting a host task using the first predictive model, comprising: acquiring a historical host task feature sample from host I/O task information; and taking the historical host task feature sample as a first prediction model input set, inputting a first prediction model, and obtaining a host task prediction result, wherein the host task prediction result comprises a first host task ending time.
Further, after obtaining the historical host task feature sample from the host I/O task information, the method further includes: preprocessing a historical host task feature sample;
and replacing the history host task feature sample with the preprocessed history host task feature sample to serve as a first prediction model input set, and inputting a first prediction model to obtain a host task prediction result.
Further, performing the internal tasks of the hard disk includes: starting an internal task, and obtaining the number of flash memory blocks to be recovered; calculating the predicted duration of the internal task and the execution speed of the internal task according to the number of flash memory blocks to be recovered, the predicted ending time of the first host task and the predicted starting time of the second host task; and executing the internal task at the internal task execution speed until the internal task is completed, wherein the internal task prediction duration is the difference between the prediction starting time of the second host task and the prediction ending time of the first host task, the prediction starting time of the second host task is obtained by prediction of the first prediction model, and the internal task execution speed is the ratio of the number of flash memory blocks to be recovered to the internal task prediction duration.
Further, the hard disk task management method further comprises the following steps: and in response to receiving the host task in the internal task execution process, executing the internal task at a preset speed until the internal task is completed.
Further, the first predictive model is a time domain convolutional network model.
Further, performing the next host task includes: acquiring a next host task request from a host task request queue, and judging the type of the next host task request; and responding to the next host task request as a writing request, and writing the data corresponding to the writing request into the corresponding position of the hard disk according to the writing request.
Further, writing the data corresponding to the writing request into the corresponding position of the hard disk according to the writing request, including:
Obtaining corresponding writing task parameters through a second prediction model according to the writing request, wherein the writing task parameters comprise data storage time; and calling the idle flash memory block according to the data storage duration, and storing the data corresponding to the writing request.
Further, according to the writing request, obtaining corresponding writing task parameters through the second prediction model, including:
Extracting writing task characteristics according to the writing request, wherein the writing task characteristics comprise: logical block address, data capacity; updating the writing task input set according to the writing task characteristics; and inputting the writing task input set into a second prediction model to obtain writing task parameters.
Further, after extracting the writing task feature according to the writing request, the method further includes: preprocessing the written task characteristics; the write task input set is updated with the processed write task characteristics instead of updating the write task input set with the write task characteristics.
Further, according to the data saving duration, calling the idle flash memory block to save the data corresponding to the writing request, including: ordering the idle flash memory blocks according to the order of the abrasion values to obtain an idle flash memory block queue; in response to the data storage time length being greater than or equal to the storage time length threshold, acquiring an idle flash memory block from one end with a larger abrasion value of an idle flash memory block queue, and storing corresponding data; and responding to the data storage time length being smaller than the storage time length threshold value, acquiring the idle flash memory block from the end with the smaller abrasion value of the idle flash memory block queue, and storing corresponding data.
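As an illustrative sketch (not part of the patent text), this wear-aware placement policy can be expressed in a few lines of Python; the dictionary fields and the use of a per-block wear counter are assumptions for illustration:

```python
def pick_free_block(free_blocks, save_duration, duration_threshold):
    """Pick a free flash block by predicted data lifetime.

    Long-lived (cold) data goes to the most-worn end of the queue, so
    heavily worn blocks see few future erases; short-lived (hot) data
    goes to the least-worn end.
    """
    queue = sorted(free_blocks, key=lambda blk: blk["wear"])
    if save_duration >= duration_threshold:
        return queue[-1]   # larger-wear end of the queue
    return queue[0]        # smaller-wear end of the queue
```

Placing long-retention data on heavily worn blocks in this way tends to even out wear across the flash array over time.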
Further, before obtaining the corresponding writing task parameter through the second prediction model according to the writing request, the method further includes: judging whether the second prediction model is updated; and responding to the second prediction model to complete updating, and acquiring the written task parameters by using the updated second prediction model.
Further, in response to the second predictive model not completing the update, obtaining the written task parameters using the current second predictive model; continuing to train the second prediction model to obtain corresponding latest second prediction model parameters; and replacing the current second prediction model parameters with the corresponding latest second prediction model parameters to finish updating the second prediction model.
Further, the hard disk task management method further comprises the following steps: updating the task data sample set to be written every second preset time; training a second prediction model by adopting the updated written task data sample set to obtain the latest second prediction model parameters; the second predictive model is updated with the latest second predictive model parameters.
Further, the hard disk task management method further comprises the following steps: and responding to the next host task as a reading request, and executing the reading task corresponding to the reading request.
Further, executing the read task corresponding to the read request includes: reading corresponding data from the flash memory according to the reading request; and decoding the read corresponding data, and transmitting the decoded data back to the host computer to complete the data reading process.
Further, in response to the host task queue being empty, executing the internal task; in response to completion of the internal task execution, the arrival of the next host task is awaited.
Further, the second predictive model is a time domain convolutional network model.
In a second aspect, there is provided a hard disk task management device, including: the prediction module is used for predicting the host task by using the first prediction model and obtaining the predicted ending time of the first host task in a prediction way; the judging module is used for judging whether the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than the preset time or not in response to the completion of the execution of the first host task; the internal task execution module is used for executing the internal task of the hard disk in response to the fact that the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than the preset time, wherein the internal task of the hard disk comprises garbage collection; and the host task execution module is used for executing the next host task in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being greater than or equal to the preset time.
In a third aspect, a computer-readable storage medium is provided, on which a hard disk task management program is stored, which when executed by a processor, implements the hard disk task management method according to the first aspect.
In a fourth aspect, a hard disk task management system is provided, which includes a memory, a processor, and a hard disk task management program stored in the memory and executable on the processor, wherein the hard disk task management method according to the first aspect is implemented when the processor executes the hard disk task management program.
The technical scheme provided by the embodiment of the invention has the beneficial effects that: the method can manage the execution of the hard disk to the host task or the internal task of the hard disk according to the prediction of the model to the host task, reduce the simultaneous access of the host task and the internal task to the flash memory, and improve the read-write performance of the hard disk.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a solid state disk architecture with an added data prediction module;
FIG. 2 is a schematic diagram of a method for managing tasks of a hard disk according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a prediction model architecture according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an input sequence and an output sequence provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a device for managing tasks of a hard disk according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a hard disk task management system according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Unless defined otherwise, technical or scientific terms used in this disclosure should be given the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The terms "first," "second," and the like, as used in this disclosure, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. Likewise, the terms "a," "an," or "the" and similar terms do not denote a limitation of quantity, but rather denote the presence of at least one. The numerals in the drawings of the specification merely denote distinction of respective functional components or modules, and do not denote logical relationships between the components or modules. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may also be changed when the absolute position of the object to be described is changed.
Hereinafter, various embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. Note that in the drawings, the same reference numerals are given to constituent parts having substantially the same or similar structures and functions, and repeated description thereof will be omitted.
In order to alleviate the degradation of host read-write performance caused by host tasks and internal tasks accessing the flash memory simultaneously, many methods for optimizing internal tasks (garbage collection, wear leveling) have emerged in recent years. A hot data prediction method based on machine learning (Machine Learning, ML) can be applied well to various workloads, as shown in fig. 1. Compared with the traditional solid state disk architecture, a data prediction module is added, and an echo state network is adopted to actively learn and predict the behavior pattern of hot data, so that hot data identification is converted into active hot data prediction, realizing the separation of cold data and hot data. This further reduces the number of garbage collection triggers and reduces the average response time of host tasks.
However, most of the existing methods focus on optimizing internal tasks of the solid state disk, and lack a method for optimizing task allocation of the solid state disk from the perspective of collaborative work of host tasks and internal tasks. As solid state disks are continually written with data, the frequency at which internal tasks are triggered increases. Only optimizing the internal task of the solid state disk is difficult to further improve the read/write performance of the host task.
Aiming at the prior-art problem that host tasks contend with solid state disk internal tasks for flash memory access, thereby degrading the host's read-write performance on the solid state disk, the invention adopts the following technical scheme: based on model prediction, host tasks and hard disk internal tasks are managed, improving host read/write performance.
In some embodiments, as shown in fig. 2, a method for managing a hard disk task includes: s100: predicting a host task by using a first prediction model, and predicting to obtain a predicted ending time of the first host task;
S200: in response to the completion of the execution of the first host task, judging whether the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than a preset time; s300: executing the internal task of the hard disk in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being less than a preset time, wherein the internal task of the hard disk comprises garbage collection;
s400: and executing the next host task in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being greater than or equal to the preset time.
The first prediction model predicts the parameters of a host task to obtain the host task's predicted type, predicted start time and predicted end time. Preferably, the predicted start time and predicted end time are referenced to the host CPU timestamp. Accordingly, the first host task corresponds to the host task currently being executed, and the second host task corresponds to the host task to be executed next. The first prediction model thus yields the predicted type, predicted start time and predicted end time of the first host task, and likewise the predicted type, predicted start time and predicted end time of the second host task. The end time of the current host task is the start of the task idle period, and the start time of the next host task is the end of the task idle period; correspondingly, the predicted end time of the first host task is the start of the predicted task idle period, and the predicted start time of the second host task is its end. The actual end time of the current host task and the actual start time of the next host task are obtained from the host CPU timestamp. Subtracting the predicted end time of the first host task from the predicted start time of the second host task gives the duration of the predicted task idle period. During the task idle period, the solid state disk can execute internal tasks, which include garbage collection, wear leveling (Wear Leveling, WL), and the like.
The preset time is used for measuring the prediction precision of the first prediction model and is obtained through setting.
The difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than the preset time, which indicates that the model prediction accuracy is high; if the difference between the actual end time of the first host task and the predicted end time of the first host task is greater than or equal to the preset time, the model prediction accuracy is low.
When the model prediction accuracy is high, the prediction accuracy for the task idle prediction period is high, and at this time, the hard disk can process the internal tasks of the hard disk, such as garbage collection, wear leveling, and the like, by using the time indicated by the task idle prediction period. Under the condition of low model prediction precision, the hard disk preferentially processes host tasks.
The host tasks include a read task and a write task.
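As an illustrative sketch (not part of the patent text), the decision made in S200 through S400 fits in a few lines of Python; the function name and the use of an absolute difference are assumptions:

```python
def schedule_after_host_task(actual_end, predicted_end, preset_time):
    """Decide what the drive runs once the current host task finishes.

    A small prediction error means the model's idle-window forecast can
    be trusted, so internal tasks (e.g. garbage collection) run; a large
    error means host tasks are prioritized instead.
    """
    if abs(actual_end - predicted_end) < preset_time:
        return "internal_task"    # S300: high prediction accuracy
    return "next_host_task"       # S400: low prediction accuracy
```

The preset time thus acts as a per-task gate on how much the drive trusts the predicted idle window.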
Preferably, S100: before predicting the host task using the first predictive model, further comprising: s010: judging whether the first prediction model is updated; s020: and responding to the first prediction model to complete updating, and then using the updated first prediction model to conduct host task prediction.
Optionally, S020′: in response to the first prediction model not having completed updating, performing host task prediction using the current first prediction model; S021′: continuing to train the first prediction model to obtain the corresponding latest first prediction model parameters; S022′: replacing the current first prediction model parameters with the corresponding latest first prediction model parameters to finish updating the first prediction model.
Updating the first predictive model, i.e. updating the first predictive model parameters. The prediction and training of the first predictive model may be performed synchronously. When the first prediction model is not updated, performing host task prediction by adopting a first prediction model of old version first prediction model parameters; and (3) after the first prediction model is updated, carrying out host task prediction by adopting a first prediction model of new version of first prediction model parameters.
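The non-blocking update described above (keep predicting with the current parameters while training runs, then swap in the new ones) can be sketched as follows; the class, a dot-product stand-in for the real model, and the version counter are all hypothetical:

```python
class FirstPredictionModel:
    """Minimal stand-in for the first prediction model: predictions use
    whatever parameter version is currently installed, and a finished
    training run installs the latest parameters in one step."""

    def __init__(self, params):
        self.params = params
        self.version = 0

    def predict(self, features):
        # Uses the currently installed parameters, old or new.
        return sum(p * f for p, f in zip(self.params, features))

    def finish_update(self, latest_params):
        # Replace the current parameters with the latest trained ones.
        self.params = latest_params
        self.version += 1
```

Because prediction only reads the installed parameters, the swap is the sole point of coordination between prediction and training.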
Preferably, the hard disk task management method further comprises: s001: updating a host task data sample set every first preset time; s002: training a first prediction model by adopting the updated host task data sample set to obtain the latest first prediction model parameters; s003: the first predictive model is updated with the latest first predictive model parameters.
The first predictive model is deployed on the host side, and the host task data sample set is a data set for training the first predictive model.
As time passes and workloads run, the number of host tasks keeps increasing, and host tasks are stored in the I/O trace as host I/O task information. When the time span of the data in the I/O trace is an integer multiple of the first period, the host task data of the most recent first period is added to the host task data sample set, and the first prediction model is retrained to improve its prediction accuracy. The first period may be one week, half a month, one month, etc. Preferably, the training of the first prediction model may be performed on a graphics processor (Graphics Processing Unit, GPU) of a computer or server. Each sample in the host task data sample set mainly focuses on the following features: (IO_type, T_start, T_end), where IO_type is the host task type, T_start is the start time of the corresponding host task, and T_end is the end time of the corresponding host task. The host task data sample set contains all task feature data that has appeared in the I/O trace. As the host task data sample set grows, the accuracy of the first prediction model will increase through training.
Preferably, all data in the host task data sample set is normalized data.
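The patent only states that the sample data is normalized, without fixing a scheme; min-max scaling is one common choice and is sketched here as an assumption:

```python
def min_max_normalize(column):
    """Scale one feature column (e.g. the T_start values) into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0] * len(column)   # constant column: map to zero
    return [(v - lo) / (hi - lo) for v in column]
```

Each feature column of the sample set would be scaled independently before being fed to the model.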
Specifically, S100: predicting a host task using a first predictive model, comprising: s110: acquiring a historical host task feature sample from host I/O task information; s120: and taking the historical host task feature sample as a first prediction model input set, inputting a first prediction model, and obtaining a host task prediction result, wherein the host task prediction result comprises a first host task ending time.
The historical host task feature samples obtained from the host I/O task information include three features: (IO_type, T_start, T_end). The historical host task feature samples contain all task feature data that has appeared in the I/O trace. As the size of the historical host task feature samples increases, the prediction accuracy of the first prediction model will increase.
Preferably, S110: after obtaining the historical host task feature sample from the host I/O task information, the method further comprises the following steps:
S115: preprocessing a historical host task feature sample; s120 ʹ: and replacing the history host task feature sample with the preprocessed history host task feature sample to serve as a first prediction model input set, and inputting a first prediction model to obtain a host task prediction result.
In some preferred embodiments, the sample data in the first set of prediction model inputs is normalized data.
Specifically, performing the internal tasks of the hard disk includes: s310: starting an internal task, and obtaining the number of flash memory blocks to be recovered; s320: calculating the predicted duration of the internal task and the execution speed of the internal task according to the number of flash memory blocks to be recovered, the predicted ending time of the first host task and the predicted starting time of the second host task;
S330: in response to the predicted duration of the internal task not exceeding the duration of the predicted task idle period, executing the internal task at the internal task execution speed until the internal task is completed, wherein the internal task's predicted duration equals the duration of the predicted task idle period, the duration of the predicted task idle period is the difference between the predicted start time of the second host task and the predicted end time of the first host task, and the predicted start time of the second host task is obtained by prediction of the first prediction model.
The internal task execution speed is calculated by the ratio of the number of flash memory blocks to be recovered to the predicted duration of the internal task.
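The relationship between the idle window and the internal task execution speed described in S310–S330 can be sketched as follows; the function and variable names are illustrative, and only the ratio definition comes from the text.

```python
def plan_internal_task(blocks_to_recover, t1_predicted_end, t2_predicted_start):
    """Return (predicted duration, execution speed) for the internal task."""
    # Task-idle prediction period: gap between the first host task's predicted
    # end time and the second host task's predicted start time (both values
    # come from the first prediction model).
    idle_window = t2_predicted_start - t1_predicted_end
    if idle_window <= 0:
        raise ValueError("no idle window predicted between host tasks")
    # Per the text, the internal task's predicted duration equals the idle
    # window, and its speed is blocks-to-recover divided by that duration.
    return idle_window, blocks_to_recover / idle_window

duration, speed = plan_internal_task(blocks_to_recover=40,
                                     t1_predicted_end=100.0,
                                     t2_predicted_start=120.0)
```

Sizing the speed this way means garbage collection finishes exactly as the next host task is predicted to arrive, rather than racing ahead at full speed and contending with host I/O.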
Optionally, the hard disk task management method further includes: s340: and in response to receiving the host task in the internal task execution process, executing the internal task at a preset speed until the internal task is completed.
The preset speed is configurable. Typically, it is set to the minimum execution speed of the hard disk's internal task, so as to yield sufficient bandwidth for handling host tasks.
Specifically, performing the next host task includes: s410: acquiring a next host task request from a host task request queue, and judging the type of the next host task request; s420: and responding to the next host task request as a writing request, and writing the data corresponding to the writing request into the corresponding position of the hard disk according to the writing request.
Specifically, writing data corresponding to a write request into a corresponding position of a hard disk according to the write request includes: s421: obtaining corresponding writing task parameters through a second prediction model according to the writing request, wherein the writing task parameters comprise data storage time; s422: and calling the idle flash memory block according to the data storage duration, and storing the data corresponding to the writing request.
Specifically, S421: according to the writing request, obtaining corresponding writing task parameters through a second prediction model, wherein the writing task parameters comprise: s4211: extracting writing task characteristics according to the writing request, wherein the writing task characteristics comprise: logical block address, data capacity; s4212: updating the writing task input set according to the writing task characteristics; s4213: and inputting the writing task input set into a second prediction model to obtain writing task parameters.
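The three-step flow S4211–S4213 (extract write task features, update the input set, query the second prediction model) can be sketched as follows; the request format and the stand-in model are illustrative assumptions, as the text does not specify them.

```python
def predict_write_params(write_request, input_set, model_predict):
    """Run the S4211-S4213 pipeline for one write request."""
    # S4211: extract write task features (logical block address, data capacity).
    features = (write_request["lba"], write_request["len"])
    # S4212: update the write task input set with the new features.
    input_set.append(features)
    # S4213: feed the input set to the second prediction model to obtain
    # write task parameters (including the data storage duration).
    return model_predict(input_set)

# Stand-in "model": derives a storage duration from the latest request's capacity.
fake_model = lambda s: {"storage_duration": s[-1][1] * 1.5}
params = predict_write_params({"lba": 0x3000, "len": 8}, [], fake_model)
```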
Alternatively, S4211: after extracting the writing task features according to the writing request, the method further comprises the following steps: s42111 ʹ: preprocessing the written task characteristics; s4212 ʹ: the write task input set is updated with the processed write task characteristics instead of updating the write task input set with the write task characteristics.
Specifically, S422: calling a free flash memory block according to the data storage duration and storing the data corresponding to the write request includes: S4221: sorting the free flash memory blocks in order of wear value to obtain a free flash memory block queue; S4222: in response to the data storage duration being greater than or equal to the storage duration threshold, acquiring a free flash memory block from the higher-wear end of the free flash memory block queue and storing the corresponding data; S4223: in response to the data storage duration being smaller than the storage duration threshold, acquiring a free flash memory block from the lower-wear end of the free flash memory block queue and storing the corresponding data.
The wear value (P/E count) is the number of times a flash block has been erased and reflects the block's remaining life. The smaller a block's wear value, the more times it can still be erased; the larger its wear value, the fewer erase cycles remain. When a block's wear value approaches the rated wear limit, its failure risk rises sharply and it is no longer suitable for storing data.
The data storage duration is the second prediction model's estimate of how long the corresponding data will remain valid. When the logical address mapped to the flash block holding the data is written again, the data on that block becomes invalid. Flash blocks holding invalid data are erased during garbage collection as blocks to be recovered; after erasure, they become free flash memory blocks.
Data whose storage duration is greater than or equal to the storage duration threshold is defined as cold data, and data whose storage duration is below the threshold is defined as hot data. To balance wear across the flash blocks of the solid state disk, keep wear uniform among blocks, and avoid localized bad blocks, cold data is stored in free flash blocks with larger wear values and hot data in free flash blocks with smaller wear values.
Correspondingly, the flash blocks storing cold data constitute a cold pool and those storing hot data constitute a hot pool; the storage duration threshold is the lower bound of the cold pool's data storage duration.
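The wear-ordered free-block queue of S4221–S4223 and the cold/hot placement rule can be sketched as follows; the deque representation, block identifiers, and threshold value are illustrative assumptions.

```python
from collections import deque

def build_free_queue(free_blocks):
    """free_blocks: list of (block_id, wear_value); sorted ascending by wear."""
    return deque(sorted(free_blocks, key=lambda b: b[1]))

def allocate(free_queue, predicted_retention, retention_threshold):
    """Cold data (long retention) goes to the most-worn free block;
    hot data (short retention) goes to the least-worn free block."""
    if predicted_retention >= retention_threshold:
        return free_queue.pop()       # high-wear end: cold pool
    return free_queue.popleft()       # low-wear end: hot pool

queue = build_free_queue([(1, 500), (2, 120), (3, 900)])
cold_block = allocate(queue, predicted_retention=72.0, retention_threshold=24.0)
hot_block = allocate(queue, predicted_retention=2.0, retention_threshold=24.0)
```

Pulling cold data from the high-wear end and hot data from the low-wear end steers frequent rewrites toward the least-worn blocks, which is what equalizes wear across the disk.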
Preferably, S421: before obtaining the corresponding writing task parameters through the second prediction model according to the writing request, the method further comprises the following steps: s4201: judging whether the second prediction model is updated; s4202: and responding to the second prediction model to complete updating, and acquiring the written task parameters by using the updated second prediction model.
Optionally, S4202 ʹ: responding to incomplete updating of the second prediction model, and acquiring write-in task parameters by using the current second prediction model; s4203 ʹ: continuing to train the second prediction model to obtain corresponding latest second prediction model parameters; s4204 ʹ: and replacing the current second prediction model parameters with the corresponding latest second prediction model parameters to finish updating the second prediction model.
Preferably, the hard disk task management method further comprises: s401: updating the task data sample set to be written every second preset time; s402: training a second prediction model by adopting the updated written task data sample set to obtain the latest second prediction model parameters; s403: the second predictive model is updated with the latest second predictive model parameters.
The second prediction model is deployed on the solid state disk side and is used to estimate the data storage duration and the data write time of each piece of written data.
The data writing time refers to the time taken to write data to the hard disk.
As time passes and the workload runs, the amount of data written to the hard disk keeps growing. Whenever the elapsed time span reaches an integer multiple of the second period, the host task data from the most recent second period is appended to the write task data sample set and the second prediction model is retrained to improve its prediction accuracy. The second period may be one week, half a month, one month, and so on. Each sample in the write task data sample set mainly focuses on the features (LBA, IO_len, IO_seq), where LBA is the logical block address, IO_len is the data capacity, and IO_seq is the write type. The write task data sample set contains all of the feature data written to the hard disk; as the amount of written data grows, the capacity of the sample set increases, and the accuracy of the second prediction model improves through training.
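The periodic sample-set expansion and retraining of S401–S403 can be sketched as follows; the record format and the stand-in training function are illustrative assumptions.

```python
def maybe_retrain(now, last_train_time, period, sample_set, recent_writes, train_fn):
    """Once per second preset period, extend the write-task sample set with
    (LBA, IO_len, IO_seq) features and retrain the second prediction model."""
    if now - last_train_time < period:
        return None  # period not over: keep the current model parameters
    sample_set.extend((w["lba"], w["len"], w["seq"]) for w in recent_writes)
    return train_fn(sample_set)  # yields the latest second-prediction-model parameters

samples = [(0x1000, 8, 1)]
recent = [{"lba": 0x2000, "len": 16, "seq": 0}]
# Stand-in "training": just report the sample count as the new "parameters".
params = maybe_retrain(now=30, last_train_time=0, period=7, sample_set=samples,
                       recent_writes=recent, train_fn=len)
```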
Optionally, the hard disk task management method further includes: s420 ʹ: and responding to the next host task as a reading request, and executing the reading task corresponding to the reading request.
Specifically, executing the read task corresponding to the read request includes: s421 ʹ: reading corresponding data from the flash memory according to the reading request; s422 ʹ: and decoding the read corresponding data, and transmitting the decoded data back to the host computer to complete the data reading process.
Preferably, the read data is ECC decoded.
Optionally, in response to the host task queue being empty, S411 ʹ: executing the internal tasks;
S412 ʹ: in response to completion of the internal task execution, the arrival of the next host task is awaited.
Specifically, the first prediction model and the second prediction model are temporal convolutional network (TCN) models.
The principle of the temporal convolutional network and its application to the first and second prediction models are briefly described below.
Fig. 3 shows the prediction model architecture shared by the first and second prediction models. It comprises a feature sequence input layer, an embedding layer, a feature fusion layer, a Dropout layer, an enhancement layer, and an output layer. The enhancement layer consists of a number of residual blocks (for example, 100 or 500), each built mainly from a dilated causal convolution, whose calculation formula is as follows:
Let the input sequence be $x = (x_0, x_1, \ldots, x_T)$ and the convolution filter be $f : \{0, 1, \ldots, k-1\} \to \mathbb{R}$, where the convolution kernel size is $k$. The dilated causal convolution at position $s$ is
$$F(s) = (x \ast_d f)(s) = \sum_{i=0}^{k-1} f(i) \cdot x_{s - d \cdot i}$$
where $d$ is the dilation factor, taking the value $2^{A-1}$ with $A$ a natural number; the dilation factors of different convolutional layers differ. The index $s - d \cdot i$ reaches back into the features of the previous layer, opposite to the direction of time-series evolution, so that only past inputs contribute. The receptive field of the convolution kernel is $(k-1) \cdot d + 1$. The output of a residual block is
$$o = \sigma\bigl(x + F(x)\bigr)$$
where $\sigma$ is the activation function.
FIG. 4 is an exemplary diagram of an input sequence and an output sequence, the input sequence being a host task data sample set, or a write task data sample set.
The input length is the length of the input sequence, and the output length is the length of the prediction result; the input sequence and the target sequence are of equal length. For example, if only the next data point is to be predicted, the output length is 1.
The length of the input sequence and the output sequence may be empirically set, such as to 100, and the overlap length of the target sequence and the input sequence may be set to the difference between the input length and the output length.
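The dilated causal convolution at the core of the residual blocks can be illustrated with a small numeric sketch: the output at each position sums over past inputs spaced `d` apart, zero-padded on the left so no future input leaks in. A real model would use a deep-learning framework; the filter values here are arbitrary.

```python
def dilated_causal_conv(x, f, d):
    """x: input sequence, f: filter taps (length k), d: dilation factor.
    Computes F(s) = sum_{i=0}^{k-1} f(i) * x[s - d*i] with zero left-padding."""
    k = len(f)
    out = []
    for s in range(len(x)):
        acc = 0.0
        for i in range(k):
            j = s - d * i          # only past (causal) positions contribute
            if j >= 0:
                acc += f[i] * x[j]
        out.append(acc)
    return out

# One kernel with k = 2 taps and dilation d = 2: receptive field (k-1)*d + 1 = 3.
y = dilated_causal_conv([1.0, 2.0, 3.0, 4.0], f=[0.5, 0.5], d=2)
```

Stacking layers with growing dilation factors ($d = 1, 2, 4, \ldots$) is what lets the enhancement layer's residual blocks cover long input histories with few layers.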
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; nor is their order of execution necessarily sequential, as they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In other embodiments, as shown in fig. 5, a hard disk task management device includes: a prediction module, configured to predict a host task using the first prediction model and obtain by prediction the predicted end time of the first host task; a judging module, configured to judge, in response to completion of execution of the first host task, whether the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than a preset time;
The internal task execution module is used for executing the internal task of the hard disk in response to the fact that the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than the preset time, wherein the internal task of the hard disk comprises garbage collection; and the host task execution module is used for executing the next host task in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being greater than or equal to the preset time.
For specific limitations of the hard disk task management device, reference may be made to the limitations of the hard disk task management method above, which are not repeated here. The modules in the hard disk task management device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor in a computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In other embodiments, a computer readable storage medium has stored thereon a hard disk task management program that when executed by a processor implements the hard disk task management method of the first aspect. The method for managing the hard disk task is described in detail above and will not be described in detail here.
In still other embodiments, as shown in fig. 6, a hard disk task management system is provided, which includes a memory, a processor, and a hard disk task management program stored in the memory and capable of running on the processor, where the processor implements the hard disk task management method according to the first aspect when executing the hard disk task management program. The method for managing the hard disk task is described in detail above and will not be described in detail here.
By implementing the method, device, storage medium and system for managing hard disk tasks disclosed in the embodiments of the present invention, the hard disk's execution of host tasks or internal tasks can be managed according to the model's prediction of host tasks, avoiding simultaneous access to the flash memory by host tasks and internal tasks and improving the read/write performance of the hard disk. When concurrent access is unavoidable, the execution speed of the internal task is reduced so that host tasks are served with greater bandwidth. In addition, the characteristics of written data are computed by the model, the data are stored into the corresponding flash blocks according to their cold or hot attributes, and the wear values of the flash blocks are equalized, extending the service life of the solid state disk.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present invention, which is not described herein.
Example 1
A method for managing hard disk tasks, as shown in fig. 2, includes: s100: predicting a host task by using a first prediction model, and predicting to obtain a predicted ending time of the first host task; s200: in response to the completion of the execution of the first host task, judging whether the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than a preset time; s300: executing the internal task of the hard disk in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being less than a preset time, wherein the internal task of the hard disk comprises garbage collection; s400: and executing the next host task in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being greater than or equal to the preset time.
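The top-level decision of S100–S400 can be sketched as follows: after a host task completes, a small gap between its actual and predicted end times means the model is tracking the workload well and the predicted idle window can be spent on garbage collection; otherwise the next host task is served. The use of an absolute difference and all names are illustrative assumptions.

```python
def dispatch(actual_end, predicted_end, preset_time):
    """Return which branch the controller takes after a host task completes."""
    if abs(actual_end - predicted_end) < preset_time:
        return "internal_task"   # S300: run garbage collection in the idle window
    return "next_host_task"      # S400: prediction unreliable, serve the host

branch = dispatch(actual_end=100.2, predicted_end=100.0, preset_time=0.5)
```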
Example two
On the basis of embodiment one, preferably, S100: before predicting the host task using the first predictive model, further comprising: s010: judging whether the first prediction model is updated; s020: and responding to the first prediction model to complete updating, and then using the updated first prediction model to conduct host task prediction.
Optionally, S020 ʹ: in response to the first prediction model not being updated, performing host task prediction using the current first prediction model; s021 ʹ: continuing to train the first prediction model to obtain corresponding latest first prediction model parameters; s022 ʹ: and replacing the current first prediction model parameters with the corresponding latest first prediction model parameters to finish updating the first prediction model.
Preferably, the hard disk task management method further comprises: s001: updating a host task data sample set every first preset time; s002: training a first prediction model by adopting the updated host task data sample set to obtain the latest first prediction model parameters; s003: the first predictive model is updated with the latest first predictive model parameters.
Specifically, S100: predicting a host task using a first predictive model, comprising: s110: acquiring a historical host task feature sample from host I/O task information; s120: and taking the historical host task feature sample as a first prediction model input set, inputting a first prediction model, and obtaining a host task prediction result, wherein the host task prediction result comprises a first host task ending time.
Preferably, S110: after obtaining the historical host task feature sample from the host I/O task information, the method further comprises the following steps: s115: preprocessing a historical host task feature sample; s120 ʹ: and replacing the history host task feature sample with the preprocessed history host task feature sample to serve as a first prediction model input set, and inputting a first prediction model to obtain a host task prediction result.
Specifically, performing the internal tasks of the hard disk includes: s310: starting an internal task, and obtaining the number of flash memory blocks to be recovered; s320: calculating the predicted duration of the internal task and the execution speed of the internal task according to the number of flash memory blocks to be recovered, the predicted ending time of the first host task and the predicted starting time of the second host task; s330: and executing the internal task at the internal task execution speed until the internal task is completed, wherein the internal task prediction duration is the difference between the prediction starting time of the second host task and the prediction ending time of the first host task, the prediction starting time of the second host task is obtained by prediction of the first prediction model, and the internal task execution speed is the ratio of the number of flash memory blocks to be recovered to the internal task prediction duration.
Optionally, the hard disk task management method further includes: s340: and in response to receiving the host task in the internal task execution process, executing the internal task at a preset speed until the internal task is completed.
Specifically, performing the next host task includes: s410: acquiring a next host task request from a host task request queue, and judging the type of the next host task request; s420: and responding to the next host task request as a writing request, and writing the data corresponding to the writing request into the corresponding position of the hard disk according to the writing request.
Specifically, writing data corresponding to a write request into a corresponding position of a hard disk according to the write request includes: s421: obtaining corresponding writing task parameters through a second prediction model according to the writing request, wherein the writing task parameters comprise data storage time; s422: and calling the idle flash memory block according to the data storage duration, and storing the data corresponding to the writing request.
Specifically, S421: according to the writing request, obtaining corresponding writing task parameters through a second prediction model, wherein the writing task parameters comprise: s4211: extracting writing task characteristics according to the writing request, wherein the writing task characteristics comprise: logical block address, data capacity; s4212: updating the writing task input set according to the writing task characteristics; s4213: and inputting the writing task input set into a second prediction model to obtain writing task parameters.
Alternatively, S4211: after extracting the writing task features according to the writing request, the method further comprises the following steps: s42111 ʹ: preprocessing the written task characteristics; s4212 ʹ: the write task input set is updated with the processed write task characteristics instead of updating the write task input set with the write task characteristics.
Specifically, S422: calling a free flash memory block according to the data storage duration and storing the data corresponding to the write request includes: S4221: sorting the free flash memory blocks in order of wear value to obtain a free flash memory block queue; S4222: in response to the data storage duration being greater than or equal to the storage duration threshold, acquiring a free flash memory block from the higher-wear end of the free flash memory block queue and storing the corresponding data; S4223: in response to the data storage duration being smaller than the storage duration threshold, acquiring a free flash memory block from the lower-wear end of the free flash memory block queue and storing the corresponding data.
Preferably, S421: before obtaining the corresponding writing task parameters through the second prediction model according to the writing request, the method further comprises the following steps: s4201: judging whether the second prediction model is updated; s4202: and responding to the second prediction model to complete updating, and acquiring the written task parameters by using the updated second prediction model.
Optionally, S4202 ʹ: responding to incomplete updating of the second prediction model, and acquiring write-in task parameters by using the current second prediction model; s4203 ʹ: continuing to train the second prediction model to obtain corresponding latest second prediction model parameters; s4204 ʹ: and replacing the current second prediction model parameters with the corresponding latest second prediction model parameters to finish updating the second prediction model.
Preferably, the hard disk task management method further comprises: s401: updating the task data sample set to be written every second preset time; s402: training a second prediction model by adopting the updated written task data sample set to obtain the latest second prediction model parameters; s403: the second predictive model is updated with the latest second predictive model parameters.
Optionally, the hard disk task management method further includes: s420 ʹ: and responding to the next host task as a reading request, and executing the reading task corresponding to the reading request.
Specifically, executing the read task corresponding to the read request includes: s421 ʹ: reading corresponding data from the flash memory according to the reading request; s422 ʹ: and decoding the read corresponding data, and transmitting the decoded data back to the host computer to complete the data reading process.
Optionally, in response to the host task queue being empty, S411 ʹ: executing the internal tasks; s412 ʹ: in response to completion of the internal task execution, the arrival of the next host task is awaited.
Specifically, the first prediction model and the second prediction model are temporal convolutional network models.
Example III
A hard disk task management device, as shown in fig. 5, comprising:
the prediction module, configured to predict a host task using the first prediction model and obtain by prediction the predicted end time of the first host task; the judging module, configured to judge, in response to completion of execution of the first host task, whether the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than a preset time;
The internal task execution module is used for executing the internal task of the hard disk in response to the fact that the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than the preset time, wherein the internal task of the hard disk comprises garbage collection; and the host task execution module is used for executing the next host task in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being greater than or equal to the preset time.
Example IV
In still other embodiments, a computer-readable storage medium has stored thereon a hard disk task management program that, when executed by a processor, implements the hard disk task management method described in embodiment one or embodiment two. The method for managing the hard disk task is described in detail above and will not be described in detail here.
Example five
In other embodiments, as shown in fig. 6, a hard disk task management system is characterized by comprising a memory, a processor, and a hard disk task management program stored in the memory and capable of running on the processor, wherein the hard disk task management method described in the first embodiment or the second embodiment is implemented when the processor executes the hard disk task management program. The method for managing the hard disk task is described in detail above and will not be described in detail here.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program loaded on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device, or from memory, or from ROM. The above-described functions defined in the method of the embodiment of the present application are performed when the computer program is executed by an external processor.
It should be noted that, the computer readable medium of the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in embodiments of the present application, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (Radio Frequency), and the like, or any suitable combination thereof.
The computer readable medium may be contained in the server described above, or may exist alone without being assembled into the server. The computer readable medium carries one or more programs which, when executed by the server, cause the server to perform the hard disk task management method described above.
Computer program code for carrying out operations for embodiments of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the others. In particular, for the system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to the corresponding parts of the method embodiments. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment. A person of ordinary skill in the art can understand and implement this without undue effort.
The principles and embodiments of the present application have been described herein with reference to specific examples, which are provided only to help understand the method of the present application and its core ideas; meanwhile, a person of ordinary skill in the art may, based on the ideas of the present application, make changes to the specific embodiments and their scope of application. In view of the foregoing, this description should not be construed as limiting the application.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (24)

1. A method for managing tasks of a hard disk, comprising:
predicting a host task by using a first prediction model, to obtain a predicted end time of a first host task;
in response to the first host task completing execution, judging whether the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than a preset time;
in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being smaller than the preset time, executing an internal task of the hard disk, which comprises:
calculating a predicted duration of the internal task and an execution speed of the internal task according to the number of flash memory blocks to be reclaimed, the predicted end time of the first host task, and a predicted start time of a second host task; and executing the internal task at the internal task execution speed until the internal task is completed; wherein the internal task of the hard disk comprises garbage collection, the predicted duration of the internal task is the difference between the predicted start time of the second host task and the predicted end time of the first host task, the predicted start time of the second host task is obtained by prediction with the first prediction model, and the execution speed of the internal task is the ratio of the number of flash memory blocks to be reclaimed to the predicted duration of the internal task; and
in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being greater than or equal to the preset time, executing the next host task.
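The timing arithmetic in claim 1 can be sketched as follows. This is an illustrative sketch, not the patented implementation; the function name, the time values, and the return convention are all assumptions.

```python
def schedule_after_host_task(actual_end, predicted_end, predicted_next_start,
                             blocks_to_reclaim, preset_time):
    """Decide what to run after a host task finishes (per claim 1).

    Returns ('internal', speed) to run garbage collection at `speed`
    (blocks per time unit), or ('host', None) to serve the next host task.
    """
    if abs(actual_end - predicted_end) < preset_time:
        # The prediction was accurate, so trust the predicted idle window
        # between the first task's predicted end and the second task's
        # predicted start.
        predicted_duration = predicted_next_start - predicted_end
        # Execution speed = blocks to reclaim / predicted idle duration.
        speed = blocks_to_reclaim / predicted_duration
        return ('internal', speed)
    # The prediction missed by too much: skip GC and serve the host.
    return ('host', None)
```

For example, with `actual_end=10.2`, `predicted_end=10.0`, `predicted_next_start=14.0`, 8 blocks to reclaim, and `preset_time=0.5`, the prediction error (0.2) is within bounds, so garbage collection runs at 8 / 4.0 = 2 blocks per time unit.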
2. The hard disk task management method according to claim 1, further comprising, before predicting a host task using the first prediction model:
judging whether the first prediction model has been updated; and
in response to the first prediction model having completed updating, performing host task prediction using the updated first prediction model.
3. The hard disk task management method according to claim 2, wherein:
in response to the first prediction model not having been updated, host task prediction is performed using the current first prediction model;
training of the first prediction model is continued to obtain the corresponding latest first prediction model parameters; and
the current first prediction model parameters are replaced with the corresponding latest first prediction model parameters to complete updating of the first prediction model.
4. The hard disk task management method according to claim 2, further comprising:
updating a host task data sample set at every first preset interval;
training the first prediction model with the updated host task data sample set to obtain the latest first prediction model parameters; and
updating the first prediction model with the latest first prediction model parameters.
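The update flow of claims 2 through 4 amounts to retraining in the background and hot-swapping parameters before the next prediction. The sketch below illustrates this under stated assumptions: `MeanModel` is a toy stand-in (the patent suggests a temporal convolutional network), and all class and method names are hypothetical.

```python
import copy

class MeanModel:
    """Toy stand-in for the first prediction model; any model exposing
    fit/predict suffices to illustrate the update flow."""
    def __init__(self):
        self.params = 0.0

    def fit(self, samples):
        self.params = sum(samples) / len(samples)

    def predict(self, _features):
        return self.params

class PredictorManager:
    """Retrain on a refreshed sample set in the background (claim 4),
    then swap in the latest parameters before predicting (claims 2-3)."""
    def __init__(self, model):
        self.model = model          # current first prediction model
        self.latest_params = None   # parameters from background training
        self.updated = False

    def background_retrain(self, sample_set):
        # Train a copy so the in-service model stays untouched until done.
        trained = copy.deepcopy(self.model)
        trained.fit(sample_set)
        self.latest_params = trained.params
        self.updated = True

    def predict(self, features):
        # If an update has completed, replace the current parameters with
        # the latest ones; otherwise predict with the current model.
        if self.updated:
            self.model.params = self.latest_params
            self.updated = False
        return self.model.predict(features)
```

Training a deep copy rather than the live model keeps predictions available throughout retraining, which matches the claims' distinction between "updated" and "not updated" states.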
5. The hard disk task management method according to claim 1, wherein predicting a host task using the first prediction model comprises:
acquiring historical host task feature samples from host I/O task information; and
taking the historical host task feature samples as a first prediction model input set and inputting them into the first prediction model to obtain a host task prediction result, wherein the host task prediction result comprises the predicted end time of the first host task.
6. The hard disk task management method according to claim 5, further comprising, after acquiring the historical host task feature samples from the host I/O task information:
preprocessing the historical host task feature samples; and
using the preprocessed historical host task feature samples, in place of the original samples, as the first prediction model input set, and inputting them into the first prediction model to obtain the host task prediction result.
7. The hard disk task management method according to claim 1, wherein executing the internal task of the hard disk further comprises:
starting the internal task, and obtaining the number of flash memory blocks to be reclaimed.
8. The hard disk task management method according to claim 7, further comprising:
in response to receiving a host task during execution of the internal task, executing the internal task at a preset speed until the internal task is completed.
9. The hard disk task management method according to any one of claims 1-8, wherein the first prediction model is a temporal convolutional network (TCN) model.
10. The hard disk task management method according to claim 1, wherein executing the next host task comprises:
acquiring the next host task request from a host task request queue, and judging the type of the next host task request; and
in response to the next host task request being a write request, writing the data corresponding to the write request to the corresponding location of the hard disk according to the write request.
11. The hard disk task management method according to claim 10, wherein writing the data corresponding to the write request to the corresponding location of the hard disk according to the write request comprises:
obtaining corresponding write task parameters from a second prediction model according to the write request, wherein the write task parameters comprise a data retention duration; and
calling a free flash memory block according to the data retention duration, and saving the data corresponding to the write request.
12. The hard disk task management method according to claim 11, wherein obtaining the corresponding write task parameters from the second prediction model according to the write request comprises:
extracting write task features according to the write request, wherein the write task features comprise a logical block address and a data capacity;
updating a write task input set with the write task features; and
inputting the write task input set into the second prediction model to obtain the write task parameters.
13. The hard disk task management method according to claim 12, further comprising, after extracting the write task features according to the write request:
preprocessing the write task features; and
updating the write task input set with the preprocessed write task features, in place of updating it with the original write task features.
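A minimal sketch of the write-parameter pipeline in claims 12 and 13 follows. The dict keys, the min-max normalization, and the model-as-callable are illustrative assumptions; the patent does not specify the preprocessing.

```python
def extract_features(write_request):
    """Claim 12: the write task features are the logical block address
    and the data capacity of the request."""
    return [write_request['lba'], write_request['capacity']]

def preprocess(features, lba_max, cap_max):
    """Claim 13: normalize the features before updating the input set
    (min-max scaling is an assumed choice of preprocessing)."""
    return [features[0] / lba_max, features[1] / cap_max]

def predict_retention(model, write_request, lba_max, cap_max):
    """Feed the preprocessed input set to the second prediction model to
    obtain the write task parameters (here, the data retention duration)."""
    input_set = preprocess(extract_features(write_request), lba_max, cap_max)
    return model(input_set)
```

For example, a request `{'lba': 512, 'capacity': 64}` with `lba_max=1024` and `cap_max=128` yields the normalized input set `[0.5, 0.5]` before the model is queried.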
14. The hard disk task management method according to claim 11, wherein calling a free flash memory block according to the data retention duration and saving the data corresponding to the write request comprises:
sorting the free flash memory blocks by wear value to obtain a free flash memory block queue;
in response to the data retention duration being greater than or equal to a retention duration threshold, acquiring a free flash memory block from the higher-wear end of the free flash memory block queue, and saving the corresponding data; and
in response to the data retention duration being smaller than the retention duration threshold, acquiring a free flash memory block from the lower-wear end of the free flash memory block queue, and saving the corresponding data.
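The block-selection rule of claim 14 can be sketched as below; the tuple representation of blocks and the deque-based queue are assumptions made for illustration.

```python
from collections import deque

def pick_free_block(free_blocks, retention_duration, retention_threshold):
    """Claim 14: sort free blocks by wear to form a queue, then place
    long-retention (cold) data on the higher-wear end and short-retention
    (hot) data on the lower-wear end, evening out wear over time.

    free_blocks: iterable of (block_id, wear_value) tuples.
    """
    queue = deque(sorted(free_blocks, key=lambda b: b[1]))  # ascending wear
    if retention_duration >= retention_threshold:
        return queue.pop()       # higher-wear end of the queue
    return queue.popleft()       # lower-wear end of the queue
```

The design intuition is that cold data rarely forces further erases, so putting it on already-worn blocks spares those blocks, while hot data churns low-wear blocks that can absorb more program/erase cycles.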
15. The hard disk task management method according to claim 11, further comprising, before obtaining the corresponding write task parameters from the second prediction model according to the write request:
judging whether the second prediction model has been updated; and
in response to the second prediction model having completed updating, acquiring the write task parameters using the updated second prediction model.
16. The hard disk task management method according to claim 15, wherein:
in response to the second prediction model not having been updated, the write task parameters are acquired using the current second prediction model;
training of the second prediction model is continued to obtain the corresponding latest second prediction model parameters; and
the current second prediction model parameters are replaced with the corresponding latest second prediction model parameters to complete updating of the second prediction model.
17. The hard disk task management method according to claim 15, further comprising:
updating a write task data sample set at every second preset interval;
training the second prediction model with the updated write task data sample set to obtain the latest second prediction model parameters; and
updating the second prediction model with the latest second prediction model parameters.
18. The hard disk task management method according to claim 10, further comprising:
in response to the next host task request being a read request, executing the read task corresponding to the read request.
19. The hard disk task management method according to claim 18, wherein executing the read task corresponding to the read request comprises:
reading the corresponding data from the flash memory according to the read request; and
decoding the read data and transmitting the decoded data back to the host to complete the data reading process.
20. The hard disk task management method according to claim 1, wherein the internal task is executed in response to the host task queue being empty; and
the arrival of the next host task is awaited in response to completion of the internal task.
21. The hard disk task management method according to any one of claims 11-17, wherein the second prediction model is a temporal convolutional network (TCN) model.
22. A hard disk task management device, comprising:
a prediction module, configured to predict a host task using a first prediction model, to obtain a predicted end time of a first host task;
a judging module, configured to judge, in response to the first host task completing execution, whether the difference between the actual end time of the first host task and the predicted end time of the first host task is smaller than a preset time;
an internal task execution module, configured to execute an internal task of the hard disk in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being smaller than the preset time, which comprises:
calculating a predicted duration of the internal task and an execution speed of the internal task according to the number of flash memory blocks to be reclaimed, the predicted end time of the first host task, and a predicted start time of a second host task; and executing the internal task at the internal task execution speed until the internal task is completed; wherein the internal task of the hard disk comprises garbage collection, the predicted duration of the internal task is the difference between the predicted start time of the second host task and the predicted end time of the first host task, the predicted start time of the second host task is obtained by prediction with the first prediction model, and the execution speed of the internal task is the ratio of the number of flash memory blocks to be reclaimed to the predicted duration of the internal task; and
a host task execution module, configured to execute the next host task in response to the difference between the actual end time of the first host task and the predicted end time of the first host task being greater than or equal to the preset time.
23. A computer-readable storage medium having stored thereon a hard disk task management program which, when executed by a processor, implements the hard disk task management method according to any one of claims 1 to 21.
24. A hard disk task management system comprising a memory, a processor, and a hard disk task management program stored in the memory and executable on the processor, wherein the processor implements the hard disk task management method according to any one of claims 1 to 21 when executing the hard disk task management program.
CN202410226776.9A 2024-02-29 2024-02-29 Method, device, storage medium and system for managing hard disk tasks Active CN117806837B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410226776.9A CN117806837B (en) 2024-02-29 2024-02-29 Method, device, storage medium and system for managing hard disk tasks


Publications (2)

Publication Number Publication Date
CN117806837A CN117806837A (en) 2024-04-02
CN117806837B true CN117806837B (en) 2024-06-07

Family

ID=90434947


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110764714A (en) * 2019-11-06 2020-02-07 深圳大普微电子科技有限公司 Data processing method, device and equipment and readable storage medium
CN111475298A (en) * 2020-04-03 2020-07-31 北京字节跳动网络技术有限公司 Task processing method, device, equipment and storage medium
CN111913648A (en) * 2019-05-09 2020-11-10 深圳大普微电子科技有限公司 Data processing method and device for solid state disk
CN113971137A (en) * 2020-07-22 2022-01-25 华为技术有限公司 Garbage recovery method and device
CN116185300A (en) * 2023-03-09 2023-05-30 复旦大学 Software and hardware implementation method for completing efficient garbage collection of solid state disk by deep learning at host end
CN117472285A (en) * 2023-12-26 2024-01-30 深圳市领德创科技有限公司 Intelligent operation acceleration method for solid state disk, computer equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR20230112362A (en) * 2022-01-20 2023-07-27 에스케이하이닉스 주식회사 Data Processing System and Operation Method Thereof, Storage Device Therefor


Non-Patent Citations (3)

Title
A method for reducing garbage collection overhead of SSD using machine learning algorithms; Jung Kyu Park; 2017 International Conference on Information and Communication Technology Convergence; 2017-12-14; full text *
Reducing write amplification in flash by death-time prediction of logical block addresses; Chandranil Chakraborttii; Proceedings of the 14th ACM International Conference on Systems and Storage; 2021-06-14; full text *
An adaptive flash garbage collection algorithm based on pre-search; Li Hengheng; Yue Chunsheng; Hu Zeming; Journal of Chinese Computer Systems (小型微型计算机系统); 2015-10-15 (No. 10); full text *


Similar Documents

Publication Publication Date Title
US11038984B2 (en) Data prefetching for large data systems
US9256371B2 (en) Implementing reinforcement learning based flash control
CN105339908B (en) Method and apparatus for supporting long-time memory
Bateni et al. Predjoule: A timing-predictable energy optimization framework for deep neural networks
KR102356523B1 (en) Data storage device and operating method thereof
CN110727685B (en) Data compression method, equipment and storage medium based on Cassandra database
US20150195371A1 (en) Changing a cache queue based on user interface pointer movement
CN109521943B (en) Cloud database instance distribution method and related product
CN111143039B (en) Scheduling method and device of virtual machine and computer storage medium
US11763147B2 (en) Data management device for supporting high speed artificial neural network operation by using data caching based on data locality of artificial neural network
JP2018526746A (en) Method and apparatus for optimizing database transactions
US20220327018A1 (en) Behavior-driven die management on solid-state drives
US20170123975A1 (en) Centralized distributed systems and methods for managing operations
CN109313604A (en) Compress the dynamic configuration of virtual memory
CN109284233A (en) A kind of rubbish recovering method and relevant apparatus of storage system
CN116107754A (en) Memory management method and system for deep neural network
CN114968839A (en) Hard disk garbage recycling method, device and equipment and computer readable storage medium
WO2016048919A1 (en) Storage device management in computing systems
CN117194502B (en) Database content cache replacement method based on long-term and short-term memory network
CN117806837B (en) Method, device, storage medium and system for managing hard disk tasks
WO2021047398A1 (en) Method and device for storage block reclaiming, storage medium, and electronic device
CN111552652B (en) Data processing method and device based on artificial intelligence chip and storage medium
KR101575369B1 (en) Method for writing to and erasing a non-volatile memory
US20230297382A1 (en) Cache line compression prediction and adaptive compression
CN110727405A (en) Data processing method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant