EP3751743A1 - Storage system and storage control method - Google Patents

Storage system and storage control method

Info

Publication number
EP3751743A1
EP3751743A1 EP20163179.3A
Authority
EP
European Patent Office
Prior art keywords
statistical
data
kinds
statistical amount
compressor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP20163179.3A
Other languages
German (de)
English (en)
French (fr)
Inventor
Takahiro Naruko
Hiroaki Akutsu
Akifumi Suzuki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Publication of EP3751743A1 publication Critical patent/EP3751743A1/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3059Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3068Precoding preceding compression, e.g. Burrows-Wheeler transformation
    • H03M7/3079Context modeling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction

Definitions

  • the present invention generally relates to storage control for compressing and decompressing data.
  • A storage system that reduces an amount of data is known (for example, JP 2007-199891A ).
  • This type of storage system generally reduces the amount of data by compressing the data.
  • As one known compression method, there is a method, as in the run-length method, of registering text strings whose frequency of appearance is high in predetermined block units in a dictionary and substituting those text strings with codes of smaller size.
  • An irreversible compression technology is known as a technology for reducing an amount of data more than a reversible compression such as a run length method.
  • In that technology, an amount of data is reduced by converting sensor data into a power spectrum and a phase spectrum and excluding, from the recording target, the power and phase values at frequencies whose power value is less than a preset threshold.
  • The autoencoder includes a neural network and, by learning data of a specific field, can perform compression specialized for the learning data and other data of the same field.
  • The neural network is trained so that the value of a function set as a loss function becomes small.
  • In general, an error function that indexes an error between an input and an output is set as the loss function.
  • The inventors of the present specification have obtained the following knowledge as a result of a thorough examination of a storage system according to a comparative example in which an autoencoder is adopted as a compressor/decompressor.
  • a storage system that performs irreversible compression on time-series data using a compressor/decompressor based on machine learning calculates a statistical amount value of each of one or more kinds of statistical amounts based on one or more parameters in relation to original data (time-series data input to a compressor/decompressor) and calculates a statistical amount value of each of the one or more kinds of statistical amounts based on the one or more kinds of parameters in relation to decompressed data (time-series data output from the compressor/decompressor) corresponding to the original data.
  • the machine learning of the compressor/decompressor is performed based on the statistical amount value calculated for each of the one or more kinds of statistical amounts in relation to the original data and the statistical amount value calculated for each of the one or more kinds of statistical amounts in relation to the decompressed data.
  • Information that has no influence on the accuracy requisite (the required accuracy of a statistical amount calculated based on the one or more parameters), that is, information other than the information necessary to calculate the statistical amount, can be excluded from the compression result (post-compression data). Therefore, in relation to the same accuracy requisite of the statistical amount, the compression ratio is improved more than in the comparative example, as the sketch below illustrates.
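  • As a minimal illustration of this idea (a sketch under assumptions, not the patented implementation itself): if the only statistical amount of interest is a windowed mean, the loss can compare windowed means of the original and decompressed data instead of raw samples. The function names and the use of a mean square error are illustrative.

```python
import numpy as np

def windowed_mean(x: np.ndarray, n_samples: int) -> np.ndarray:
    """Partition a 1-D series into windows of n_samples and return each window's mean."""
    n_windows = len(x) // n_samples
    return x[: n_windows * n_samples].reshape(n_windows, n_samples).mean(axis=1)

def statistic_loss(original: np.ndarray, decompressed: np.ndarray, n_samples: int) -> float:
    """Error of the statistical amount (here, a mean square error between the
    windowed means). Sample-level detail that does not affect the windowed
    mean does not affect this loss, so it need not survive compression."""
    a = windowed_mean(original, n_samples)
    b = windowed_mean(decompressed, n_samples)
    return float(np.mean((a - b) ** 2))
```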
  • an “interface device” may be one or more interface devices.
  • the one or more interface devices may be at least one of the following devices:
  • a "memory” is one or more memory devices and may be generally a main storage device. At least one memory device in the memory may be a volatile memory device or may be a nonvolatile memory device.
  • a “persistent storage device” is one or more persistent storage devices.
  • the persistent storage device is generally a nonvolatile storage device (for example, an auxiliary storage device) and is specifically, for example, a hard disk drive (HDD) or a solid-state drive (SSD).
  • a “storage device” may be a physical storage device such as a persistent storage device or may be a logical storage device associated with a physical storage device.
  • a "processor” is one or more processor devices.
  • At least one processor device is generally a microprocessor device such as a central processing unit (CPU) and may be another type of processor device such as a graphics processing unit (GPU).
  • At least one processor device may be a single core or may be a multi-core.
  • At least one processor device may be a processor core.
  • At least one processor device may be an extended processor device such as a hardware circuit (for example, a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC)) that performs some or all of the processes.
  • each table is exemplary.
  • One table may be divided into two or more tables. Some or all of two or more tables may be one table.
  • A subject of the process may be considered to be a processor (or a device including the processor).
  • The program may be installed on a device such as a calculator from a program source.
  • The program source may be, for example, a program distribution server or a recording medium (for example, a non-transitory recording medium) that can be read by a calculator.
  • two or more programs may be realized as one program or one program may be realized as two or more programs.
  • a “storage system” may be an on-premises storage system, may be a cloud storage system, or may be one or both of an edge system (for example, a system collecting data from a plurality of sensors) and a core system (for example, a system receiving data collected from a plurality of sensors from an edge system).
  • an autoencoder is adopted as an example of a "compressor/decompressor," but a compressor/decompressor other than an autoencoder may be adopted.
  • An example of the compressor/decompressor may be a data model.
  • The data model may be one of a data generation rule, a model, and an execution entity.
  • the data model may be a binary string expressing a mathematical expression process, a waveform shape, regularity of a probability distribution, or the like.
  • "Original data" is time-series data before compression.
  • "Decompressed data" is data obtained by decompressing the data after the original data is compressed.
  • a generation model (for example, a Gaussian mixture model (GMM), a hidden Markov model (HMM), a stochastic context-free grammar (SCFG), a generative adversarial net (GAN), or a variational autoencoder (VAE))
  • genetic programming or the like
  • a model compression such as a mimic model may be applied to reduce an amount of information of the data model.
  • sensor data that has a plurality of sensor data sets corresponding to a plurality of times is adopted as an example of the "time-series data".
  • the "sensor data set” may be, for example, a data set that includes information indicating a sensor ID, a measured value, and a time (for example, a measurement time or a granted time stamp) .
  • the "data set” is one logical electronic data block viewed from a program such as an application program and may be, for example, one of a record, a file, a key value pair, and a tuple.
  • the invention may be applied to time-series data other than the sensor data.
  • a set of images captured by a camera may be regarded as time-series data. In this case, data per time becomes a 3-dimensional tensor.
  • the sensor data set is, for example, a data set including an ID of a camera, a 3-dimensional tensor indicating an image, and information indicating an imaging time.
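  • For illustration, one sensor data set could be represented as follows (a hypothetical sketch; the field names are not prescribed by this description):

```python
from dataclasses import dataclass

@dataclass
class SensorDataSet:
    sensor_id: str    # ID of the sensor (or camera)
    value: float      # measured value; for a camera this would be a 3-dimensional tensor
    timestamp: float  # measurement time or granted time stamp
```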
  • FIG. 1 is a diagram illustrating an overview of an exemplary system to which the invention can be applied.
  • A storage node 100, a sensor server 120, and a management server 130 are connected to each other via a switch 140.
  • the storage node 100, the sensor server 120, and the management server 130 may be independent computers, may be virtual computers such as different virtual machines operating on the same computer, or may be configured by a combination thereof.
  • Each of the storage node 100, the sensor server 120, and the management server 130 may not necessarily be one node or server and may be configured by a plurality of nodes or servers.
  • a communication network (for example, a local area network (LAN), a wide area network (WAN), or the Internet)
  • the sensor server 120 is a computer that includes hardware resources such as a processor, a memory, and an interface device and software resources such as drivers of sensors.
  • the sensor server 120 aggregates data (sensor data sets) generated by one sensor or a plurality of sensors 110 and transmits the data to the storage node 100.
  • the management server 130 is a computer that includes hardware resources such as a processor, a memory, an interface device, and local input/output devices and software resources such as a management program.
  • the management server 130 displays management screens illustrated in FIGS. 2 and 3 via a local output device (generally, a display device) in accordance with the management program.
  • The management server 130 has a function of responding, in response to a request from the storage node 100, with the settings input on a statistical amount parameter setting screen 200 (see FIG. 2 ) by a system user (for example, a manager).
  • the management server 130 may transmit an output to another computer connected by a network instead of including a local output device. Instead of including the local input device, an input may be received from another computer via the switch 140.
  • the storage node 100 includes a front-end interface 101, a processor 102, a cache memory 103, a RAM 104, a back-end interface 107, and a switch 108 connecting them to each other.
  • the front-end interface 101 and the back-end interface 107 are examples of an interface device.
  • the cache memory 103 and the RAM 104 are examples of a storage device (for example, a memory).
  • the front-end interface 101 is a device that connects the storage node 100 to the sensor server 120 and the management server 130 so that the storage node 100, the sensor server 120, and the management server 130 can communicate with each other via the switch 140.
  • the processor 102 controls the entire storage node 100 via the switch 108 based on one or more programs recorded in a program area 105 of the RAM 104 and management information recorded in a management area 106 of the RAM 104.
  • the processor 102 may be an accelerator such as a GPU or an FPGA in addition to a general arithmetic processor such as a CPU or may be a combination thereof.
  • the cache memory 103 is a storage device that temporarily retains writing target data until a compression process is completed.
  • the cache memory 103 may include a volatile element such as a dynamic random access memory (DRAM) or may be a nonvolatile drive such as an HDD or an SSD.
  • the back-end interface 107 is a device that connects the storage node 100 to the nonvolatile drive 150 such as an HDD or an SSD so that the storage node 100 and the nonvolatile drive 150 can communicate with each other.
  • the front-end interface 101, the processor 102, the cache memory 103, the RAM 104, and the back-end interface 107 described above may be configured as an ASIC or an FPGA by one semiconductor element or may be configured so that a plurality of individual integrated circuits (ICs) are connected to each other.
  • the storage node 100 and the management server 130 may be constituent elements of the storage system or one or more drives 150 may be constituent elements of the storage system.
  • FIG. 2 is a diagram illustrating a statistical amount parameter setting screen 200 output to a local output device of the management server 130.
  • the statistical amount parameter setting screen 200 is an example of a first interface (for example, a first user interface (UI)) and is, for example, a graphical user interface (GUI).
  • the statistical amount parameter setting screen 200 includes a statistical amount parameter input field 201 and a completion button 204.
  • the statistical amount parameter input field 201 is an area of a table format in which a user is allowed to input a kind of statistical amount and a parameter.
  • Each row 202 of the statistical amount parameter input field 201 corresponds to one combination of a kind of statistical amount and a parameter.
  • Each column 203 of a statistical amount parameter input field corresponds to a kind of statistical amount or a parameter.
  • An example of one such combination is as follows:
  • the parameter items such as "number of samples" or “calculation frequency” are merely exemplary. Thus, parameter items other than the parameter items exemplified in FIG. 2 may be added or at least one of the exemplified parameter items may be omitted.
  • The statistical amount parameter input field 201 may be implemented with an interface such as a drop-down list or radio buttons, in addition to an interface in which a user is allowed to designate a kind of statistical amount or a parameter in text form.
  • The statistical amount parameter setting screen 200 may allow a user to designate a kind of statistical amount and a parameter in accordance with a data description language such as JavaScript Object Notation (JSON), as in the hypothetical sketch below.
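  • A hypothetical JSON designation corresponding to two rows 202 of the statistical amount parameter input field 201 could look as follows (the key names and values are illustrative, not fixed by this description):

```json
[
  {"statistic": "mean",       "target_sensor": "sensor_01", "number_of_samples": 100, "calculation_frequency": "10 s"},
  {"statistic": "dispersion", "target_sensor": "sensor_01", "number_of_samples": 100, "calculation_frequency": "10 s"}
]
```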
  • the completion button 204 is a button for receiving notification of completion of an input from a user (for example, a system user) to the statistical amount parameter input field 201.
  • FIG. 3 is a diagram illustrating a statistical amount quality checking screen 300 output to the local output device of the management server 130.
  • the statistical amount quality checking screen 300 is an example of a second interface (for example, a second UI) and is, for example, a graphical user interface (GUI).
  • the statistical amount quality checking screen 300 is a screen on which the size of an error occurring due to compression with regard to each statistical amount designated in the statistical amount parameter input field 201 is displayed to a user.
  • the statistical amount quality checking screen 300 includes a data ID input field 301, a display button 302, and a statistical amount quality field 303.
  • the data ID input field 301 is an input field for designating data in which an error of a statistical amount is displayed.
  • For example, a section of time at which the sensor 110 generated data may be designated, an ordinal number indicating which transmission from the sensor server 120 to the storage node 100 is meant may be designated, or any other information specifying a range of sensor data (an example of time-series data) may be designated.
  • the display button 302 is a button for allowing a user to decide information input to the data ID input field 301.
  • the management server 130 transmits the information input to the data ID input field 301 to the storage node 100 and sets a value indicated by a response from the storage node 100 in the statistical amount quality field 303.
  • a row 304 of the statistical amount quality field 303 corresponds to a combination of a kind of statistical amount and a parameter. Therefore, the row 304 has a one-to-one correspondence with the row 202 of the statistical amount parameter input field 201.
  • Each column 305 of the statistical amount quality field 303 corresponds to an error of the statistical amount (for example, various errors such as a mean square error and a maximum error) in the sensor data specified from the information input to the data ID input field 301.
  • the indexing of the statistical amount is not limited to the exemplified mean square error and the maximum error. Any index may be added.
  • FIG. 4 is a diagram illustrating a configuration of the RAM 104.
  • the RAM 104 includes the program area 105 in which a program executed by the processor 102 is stored and the management area 106 in which management information read and written by the processor 102 in response to a command of the program is stored.
  • One or more programs stored in the program area 105 include, for example, an initial setting operation program 400, a statistical setting program 401, a loss setting program 402, a data writing program 403, a data reading program 404, a statistical amount quality calculation program 405, a designated statistical amount calculation program 406, a statistical amount calculation program 407, a loss function program 408, a database program 409, and a compression/decompression program 420.
  • the initial setting operation program 400 initializes the management information.
  • the statistical setting program 401 sets the designated statistical amount calculation program 406.
  • the loss setting program 402 sets the loss function program 408.
  • the data writing program 403 performs learning of a compressor/decompressor, compression of writing target data, and writing of a compression result (post-compression data) in the nonvolatile drive 150 in response to a request from the sensor server 120.
  • the data reading program 404 reads the compression result from the nonvolatile drive 150 in response to a request from the sensor server 120, the management server 130, or the like, decompresses the data, and responds with the decompressed data.
  • the statistical amount quality calculation program 405 calculates an error occurring in the data due to a compression/decompression process with regard to the statistical amount designated by the user via the statistical amount parameter setting screen 200.
  • the designated statistical amount calculation program 406 is a program that calculates a value of the statistical amount designated by the user via the statistical amount parameter setting screen 200 with regard to the data given as an input to the program.
  • The designated statistical amount calculation program 406 is set at the time of initial setting by the initial setting operation program 400 (via the statistical setting program 401), based on the statistical amount designated by the user on the statistical amount parameter setting screen 200.
  • the statistical amount calculation program 407 is a set of programs calculating statistical amounts such as a mean, a dispersion, and a median.
  • The program receives parameters such as a target sensor, the number of samples, and a calculation frequency as arguments, in addition to the calculation target sensor data of the statistical amount.
  • In general, the size of the sensor data (for example, the number of sensor data sets that form the sensor data) is greater than the number of samples used when the statistical amount is calculated. Therefore, at least one statistical amount calculation program 407 partitions the sensor data into windows of the designated number of samples and calculates the statistical amount of each window. In this way, values of a plurality of statistical amounts can be calculated from one piece of sensor data, so the return value of the statistical amount calculation program 407 generally becomes a tensor formed from the statistical amounts.
  • The calculation of the statistical amount is not limited to calculation performed between data of different times. For example, when the data at each time of the time-series data is a tensor such as an image, a statistical amount of the tensor may be calculated. Each tensor may also be divided into smaller tensors and a statistical amount of each sub-tensor may be calculated. A minimal sketch of the windowed calculation follows.
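  • A sketch of such a windowed calculation, assuming 1-D sensor data and the statistical amount kinds mean, dispersion, and median (the stacking layout of the returned tensor is illustrative):

```python
import numpy as np

def window_statistics(x: np.ndarray, n_samples: int) -> np.ndarray:
    """Partition the sensor data into windows of the designated number of
    samples and calculate statistical amounts per window; the return value
    is a tensor of shape (n_windows, 3) holding mean, dispersion, median."""
    n_windows = len(x) // n_samples
    w = x[: n_windows * n_samples].reshape(n_windows, n_samples)
    return np.stack([w.mean(axis=1), w.var(axis=1), np.median(w, axis=1)], axis=1)
```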
  • the loss function program 408 is a program that calculates a value of a loss function used to learn a neural network using the statistical amounts calculated by the designated statistical amount calculation program 406.
  • The loss function program 408 is set at the time of initialization by the initial setting operation program 400.
  • The database program 409 is a program of database software that manages the compression result.
  • the database program 409 may have a function of writing the compression result in the plurality of nonvolatile drives 150 in a distributed manner or a function of storing data in the plurality of storage nodes 100 in a distributed manner when the number of storage nodes 100 is plural.
  • the database program 409 may provide a function and an application programming interface (API) to provide storing and reading of data to other programs.
  • API application programming interface
  • the compression/decompression program 420 provides a function of learning a neural network included in the compressor/decompressor for the sensor data using designated learning data and the loss function, a function of compressing the sensor data using the learned compressor/decompressor, and a function of decompressing the sensor data from the post-compression data.
  • the compression/decompression program 420 may include a compressor/decompressor.
  • The management information stored in the management area 106 includes, for example, a statistical amount parameter management table 410 and a statistical amount quality management table 411. These tables may be implemented with a data structure other than a table, such as a hash table or a binary tree.
  • FIG. 5 is a diagram illustrating the statistical amount parameter management table 410.
  • the statistical amount parameter management table 410 retains parameters of the statistical amounts set by the user via the management server 130.
  • Each row 501 of the statistical amount parameter management table 410 corresponds to each statistical amount designated by the user on the statistical amount parameter setting screen 200 of the management server 130.
  • In a column 502 of the statistical amount parameter management table 410, a kind of statistical amount designated by the user on the statistical amount parameter setting screen 200 of the management server 130 is stored.
  • Columns 503 of the statistical amount parameter management table 410 correspond to the parameters (for example, a calculation target sensor, the number of samples when the statistical amount is calculated, the calculation frequency of the statistical amount, and the like) of the statistical amounts designated by the user on the statistical amount parameter setting screen 200 of the management server 130.
  • FIG. 6 is a diagram illustrating the statistical amount quality management table 411.
  • the statistical amount quality management table 411 is a table in which a calculation result of the statistical amount quality calculation program is retained. Each row 601 of the statistical amount quality management table 411 corresponds to each statistical amount designated by the user in the management server 130. Each column 602 of the statistical amount quality management table corresponds to an index for measuring an error of a statistical amount preset in the storage node 100, such as a mean square error or a maximum error. The index is not limited to the mean square error or the maximum error. Any index calculated by a function that calculates one scalar value using two tensors as inputs may be used.
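  • Two such index functions, each calculating one scalar value using two tensors as inputs, could be sketched as follows (illustrative implementations of the mean square error and the maximum error):

```python
import numpy as np

def mean_square_error(a: np.ndarray, b: np.ndarray) -> float:
    """Mean square error between the statistical amount tensors of the
    original data and the decompressed data."""
    return float(np.mean((a - b) ** 2))

def maximum_error(a: np.ndarray, b: np.ndarray) -> float:
    """Largest absolute deviation between the two statistical amount tensors."""
    return float(np.max(np.abs(a - b)))
```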
  • FIG. 7 is a flowchart illustrating an initial setting process.
  • the processor 102 starts the initial setting operation program 400 using first activation of the storage node 100, pressing of a reset button of the storage node 100 by the user, an instruction from the management server 130, or the like as a trigger (S700).
  • The processor 102 instructs the management server 130, via the switch 108, the front-end interface 101, and the switch 140, to perform a process of displaying the statistical amount parameter setting screen 200 on the local output device.
  • When the management server 130 receives the instruction, it outputs the statistical amount parameter setting screen 200 to the local output device so that the user can input a kind of statistical amount and a parameter.
  • The processor 102 requests the values input to the statistical amount parameter input field 201 from the management server 130 and receives the responded values via the switch 108, the front-end interface 101, and the switch 140.
  • The management server 130 waits until the user presses the completion button 204 and then responds to the storage node 100 with the parameters in the statistical amount parameter input field 201.
  • S702 assumes pull-type information extraction in which the processor 102 requests the parameters from the management server 130. Instead, push-type information extraction in which the management server 130 actively transmits the parameters to the storage node 100 may be adopted. In this case, the processor 102 may wait until the front-end interface 101 receives the parameters transmitted by the management server 130.
  • the processor 102 stores the parameters acquired in S702 in the statistical amount parameter management table 410.
  • The processor 102 calls the statistical setting program 401.
  • the processor 102 calls the loss setting program 402.
  • the processor 102 ends the initial setting operation program 400 (S706).
  • FIG. 8 is a flowchart illustrating a first program setting process.
  • the first program setting process is performed by the statistical setting program 401.
  • The statistical setting program 401 starts by being called in S704 of FIG. 7 (S800).
  • The processor 102 initializes the designated program area (one area in the RAM 104, not illustrated) by writing a program meaning "no process (NOP)" in the designated program area.
  • S802 to S806 are performed for each row 501 of the statistical amount parameter management table 410.
  • one row 501 is taken as an example (referred to as a "loop target row" in the description of FIG. 8 ).
  • the processor 102 specifies a kind of statistical amount from the column 502 of the loop target row of the statistical amount parameter management table 410.
  • the processor 102 acquires the parameters of the statistical amount from the columns 503 of the loop target row of the statistical amount parameter management table 410.
  • the processor 102 adds "a process of calling the statistical amount calculation program 407 calculating the statistical amount value belonging to the kind of statistical amount specified in S803 using the parameters acquired in S804 as arguments" to the program (the program written in S801) in the designated program area.
  • the processor 102 adds "a process of ending the designated statistical amount calculation program 406 using the statistical amount value group (one or more statistical amount values) calculated in S805 as a return value" to the program (the program added in S805) in the designated program area.
  • the processor 102 ends the statistical setting program 401 (S808).
  • Through the above steps, the designated statistical amount calculation program 406 is generated in the designated program area; a minimal sketch of this composition follows.
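  • A sketch of how the generated program could be composed (the registry standing in for the statistical amount calculation programs 407 and the row format are assumptions for illustration):

```python
import numpy as np

# Hypothetical stand-ins for the statistical amount calculation programs 407,
# keyed by the kind of statistical amount; each receives windowed data.
STATISTIC_PROGRAMS = {
    "mean": lambda w: w.mean(axis=1),
    "dispersion": lambda w: w.var(axis=1),
}

def build_designated_program(rows):
    """Mirror of S802-S806: for each row of the statistical amount parameter
    management table, add a call to the matching calculation program with the
    row's parameters; the composed callable returns the statistical amount
    value group, as the designated statistical amount calculation program 406 does."""
    def designated_program(data: np.ndarray):
        results = []
        for row in rows:  # e.g. {"kind": "mean", "n_samples": 100}
            n = row["n_samples"]
            n_windows = len(data) // n
            w = data[: n_windows * n].reshape(n_windows, n)
            results.append(STATISTIC_PROGRAMS[row["kind"]](w))
        return results
    return designated_program
```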
  • FIG. 9 is a flowchart illustrating a second program setting process.
  • the second program setting process is performed by the loss setting program 402.
  • the loss setting program 402 is started by being called in S705 of FIG. 7 (S900).
  • The processor 102 initializes the loss function program area (one area in the RAM 104) by writing a program meaning "no process (NOP)" in the loss function program area.
  • the processor 102 adds "a process of calling the designated statistical amount calculation program 406 using the input data of the compressor/decompressor as arguments" to the program (the program written in S901) in the loss function program area.
  • the processor 102 adds "a process of calling the designated statistical amount calculation program 406 using the output data of the compressor/decompressor as arguments" to the program (the program added in S902) in the loss function program area.
  • the processor 102 adds "a process of indexing each of one or more errors of an error group between a statistical amount value group A which is a statistical amount value group calculated in the process added in S902 and a statistical amount value group B which is a statistical amount value group calculated in the process added in S903" to the program (the program added in S903) in the loss function program area.
  • As an example of the indexing, a mean square error between the statistical amount value group A and the statistical amount value group B can be used.
  • The indexing method is not limited thereto, and any method can be used as long as it is a function that calculates one scalar value from two statistical amount value groups each formed by a plurality of statistical amount values.
  • the indexing method may not be statically set in the storage node 100 and the management server 130 may allow the user to designate an indexing method via the statistical amount parameter setting screen 200.
  • The user may be allowed to designate, for each statistical amount n, an error indexing method f_n and a target value g_n via the statistical amount parameter setting screen 200. In this case, the statistical amount value a_n of the statistical amount value group A and the statistical amount value b_n of the statistical amount value group B are indexed as f_n(a_n, b_n) in accordance with the designated method, the maximum value max(f_n(a_n, b_n), g_n) of the indexed value and the target value g_n is obtained, and the sum Sum_n(max(f_n(a_n, b_n), g_n)) over all the statistical amounts can be set as the index of the final error.
  • With this setting, the value of the loss function does not decrease further even when the compressor/decompressor is trained so that the error index f_n(a_n, b_n) of a statistical amount falls below its target value g_n. Therefore, the neural network included in the compressor/decompressor is trained so as to improve statistical amounts other than those whose error indexes are already below their target values, or the error index f_n(a_n, b_n) of the statistical amount n at another time (a sketch follows below).
  • the statistical amount values a_n and b_n may be vectors that have lengths in a time direction and correspond to each kind of statistical amount or may be values at that time.
  • the tensor may be divided into smaller tensors, an error of each sub-tensor may be calculated, and a maximum value with respect to the target value may be extracted.
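  • A sketch of this clamped loss with per-statistic target values, taking f_n to be a mean square error (an assumption for illustration) and written to be differentiable so it can drive the learning of the neural network:

```python
import torch

def clamped_statistic_loss(stats_a, stats_b, targets):
    """Sum over statistical amounts n of max(f_n(a_n, b_n), g_n). Once the
    error index of a statistic falls below its target g_n, its term is
    clamped at g_n, so further training effort shifts to the other terms."""
    total = torch.zeros(())
    for a_n, b_n, g_n in zip(stats_a, stats_b, targets):
        f_n = torch.mean((a_n - b_n) ** 2)              # error index f_n(a_n, b_n)
        total = total + torch.maximum(f_n, torch.tensor(float(g_n)))
    return total
```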
  • For example, each image can be partitioned into patches, and quality (for example, a peak signal-to-noise ratio or multi-scale structural similarity) between a patch a_n of an original image and a patch b_n of the post-compression/decompression image can be set as f_n for each patch.
  • A term for improving a compression ratio (for example, a term representing the entropy of an output of the compressor) is added to the loss function in some cases, in addition to a term representing an error.
  • The trade-off between a compression ratio and an error can be adjusted in accordance with the ratio of the term of the error to the term of the compression ratio, that is, a coefficient by which the term of the compression ratio is multiplied (hereinafter referred to as a compression coefficient).
  • In general, the compression coefficient is manually set.
  • Here, the compression coefficient can instead be controlled such that a small value is set as the compression coefficient when there is a statistical amount whose error is less than the target value, and a large value is set as the compression coefficient in the other cases. The shape of such a loss is sketched below.
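  • The overall shape of a loss with a compression-ratio term can be sketched as follows (the entropy term and the coefficient value are placeholders for illustration):

```python
import torch

def total_loss(error_term: torch.Tensor, entropy_term: torch.Tensor,
               compression_coefficient: float) -> torch.Tensor:
    """Loss combining a term representing an error with a term for improving
    the compression ratio; the compression coefficient multiplied to the
    latter adjusts the trade-off between error and compression ratio."""
    return error_term + compression_coefficient * entropy_term
```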
  • the processor 102 adds "a process of ending the loss function program 408 using the index calculated in the process added in S904 as a return value" to the program (the program added in S904) in the loss function program area.
  • The processor 102 ends the loss setting program 402 (S906).
  • the loss function program 408 is generated in the loss function program area.
  • the loss function program 408 set in the steps of FIG. 9 is a program that receives input data and output data of the compressor/decompressor as inputs and outputs an error between the two inputs with respect to the statistical amount input to the statistical amount parameter setting screen 200 by the user.
  • the loss function program 408 is set so that the error of the statistical amount designated by the user is calculated as the loss function.
  • The processor 102 may also add, to the loss function program area, a process of adding an index other than an error of a statistical amount (such as a mean square error between the input data and the output data of the compressor/decompressor, or an amount of the post-compression data) to the error of the statistical amount.
  • FIG. 10 is a flowchart illustrating a data writing process.
  • the data writing process is performed by the data writing program 403.
  • the processor 102 starts the data writing program 403 using reception of data written from the sensor server 120 by the front-end interface 101 of the storage node 100 as a trigger (S1000).
  • the processor 102 stores the writing target data received by the front-end interface 101 in the cache memory 103.
  • the processor 102 learns a neural network included in the compressor/decompressor in accordance with the compression/decompression program 420.
  • As the learning data, the sensor data stored in the cache memory 103 in S1001 is used.
  • a value of the loss function is calculated by executing the loss function program 408 using a pair of the original data (the sensor data in the cache memory 103) and the decompressed data (the output data obtained from the compressor/decompressor when the original data is an input of the compressor/decompressor) as an argument.
  • A value of a gradient of the loss function may be calculated by generally known automatic differentiation, or by generating a program that calculates a partial differential value in the loss setting program 402 and executing that program. S1002 may also be skipped, and the compressor/decompressor learned in a previous execution of S1002 may be reused in the subsequent steps.
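  • One learning step of S1002 could look as follows (a sketch assuming the compressor/decompressor is a torch.nn.Module named autoencoder and loss_fn plays the role of the loss function program 408):

```python
import torch

def learning_step(autoencoder: torch.nn.Module, optimizer: torch.optim.Optimizer,
                  original: torch.Tensor, loss_fn) -> float:
    """Run the original data through the compressor/decompressor, evaluate the
    loss function on the (original, decompressed) pair, and update the neural
    network by automatic differentiation."""
    decompressed = autoencoder(original)
    loss = loss_fn(original, decompressed)
    optimizer.zero_grad()
    loss.backward()   # gradient of the loss function via automatic differentiation
    optimizer.step()
    return loss.item()
```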
  • the processor 102 compresses the sensor data stored in the cache memory 103 in S1001 using the compressor/decompressor learned in S1002 through a compression function provided by the compression/decompression program 420.
  • the post-compression data includes all kinds of information necessary to decompress data.
  • the information included in the post-compression data differs depending on a type of the compressor/decompressor. For example, when the compressor/decompressor is configured by an autoencoder, a value of a parameter of a decoder and a value of an intermediate vector are included in the post-compression data.
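  • For the autoencoder case, the compression and decompression functions could be sketched as follows (assuming encoder and decoder submodules; the payload layout is an illustrative assumption):

```python
import torch

def compress(autoencoder, original: torch.Tensor) -> dict:
    """Produce the post-compression data: the intermediate (latent) vector
    plus the decoder parameter values needed to decompress it later."""
    with torch.no_grad():
        latent = autoencoder.encoder(original)
    return {"latent": latent, "decoder_state": autoencoder.decoder.state_dict()}

def decompress(decoder, post_compression: dict) -> torch.Tensor:
    """Set the stored parameter values in the decoder and feed it the
    intermediate vector to obtain the decompressed sensor data."""
    decoder.load_state_dict(post_compression["decoder_state"])
    with torch.no_grad():
        return decoder(post_compression["latent"])
```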
  • the processor 102 calls the statistical amount quality calculation program 405 using the post-compression data obtained in S1003 as an argument.
  • the processor 102 registers the post-compression data obtained in S1003 in a database provided by the database program 409.
  • the processor 102 acquires all the values of the statistical amount quality from the statistical amount quality management table 411 and registers the values in the database provided by the database program 409.
  • The processor 102 ends the data writing program 403 (S1007).
  • FIG. 11 is a flowchart illustrating a statistical amount quality calculation process.
  • the statistical amount quality calculation process is performed by the statistical amount quality calculation program 405.
  • the statistical amount quality calculation program 405 is started by being called in S1004 of FIG. 10 (S1100).
  • the processor 102 decompresses the sensor data from the post-compression data given as an argument using a decompression function provided by the compression/decompression program 420.
  • a method of decompressing the sensor data differs depending on a type of the compressor/decompressor. For example, when the compressor/decompressor is configured by an autoencoder, the sensor data is obtained by inputting a value of an intermediate vector included in the post-compression data to a decoder in which values of parameters included in the post-compression data are set.
  • S1102 to S1111 are performed for each row 501 of the statistical amount parameter management table 410.
  • one row 501 is taken as an example (referred to as a "loop target row" in the description of FIG. 11 ).
  • the processor 102 specifies a kind of statistical amount from the columns 502 of the loop target row of the statistical amount parameter management table 410.
  • the processor 102 acquires parameters of the statistical amount from the columns 503 of the loop target row of the statistical amount parameter management table 410.
  • the processor 102 calls the statistical amount calculation program 407 corresponding to the kind of statistical amount specified in S1103 by setting the sensor data decompressed in S1101 and the parameters acquired in S1104 as arguments.
  • the processor 102 calls the statistical amount calculation program 407 corresponding to the kind of statistical amount specified in S1103 by setting the sensor data in the cache memory 103 and the parameters acquired in S1104 as arguments.
  • S1107 to S1110 are performed for each column 602 of the statistical amount quality management table 411.
  • one column 602 is taken as an example (referred to as a "loop target column" in the description of FIG. 11 ).
  • The processor 102 calculates an error corresponding to the index corresponding to the loop target column from the tensor of the statistical amount calculated in S1105 and the tensor of the statistical amount calculated in S1106. For example, when the loop target column is a column representing "mean square error," the processor 102 calculates a mean square error between the two tensors.
  • the processor 102 stores the error calculated in S1108 in the loop target column 602 of the row 601 corresponding to the loop target row in the statistical amount quality management table 411.
  • the processor 102 ends the statistical amount quality calculation program 405 (S1112).
  • FIG. 12 is a flowchart illustrating a data reading process.
  • the data reading process is performed by the data reading program 404.
  • the processor 102 starts the data reading program 404 using reception of a data reading request by the front-end interface 101 from a computer such as the sensor server 120 or the management server 130 as a trigger (S1200).
  • a source of the data reading request is not limited to the sensor server 120 and the management server 130 and may be any computer connected to the storage node 100 via the switch 140.
  • the processor 102 analyzes the reading request received by the front-end interface 101 and acquires reading target post-compression data from the database provided by the database program 409.
  • the processor 102 decompresses the sensor data from the post-compression data acquired in S1201 using the decompression function provided by the compression/decompression program 420.
  • a method of decompressing the sensor data differs depending on a type of the compressor/decompressor. For example, when the compressor/decompressor is configured by an autoencoder, the sensor data is obtained by inputting a value of an intermediate vector included in the post-compression data to a decoder in which values of parameters included in the post-compression data are set.
  • the processor 102 responds to the computer of the request source via the front-end interface 101 with the sensor data decompressed in S1202.
  • the processor 102 ends the data reading program 404 (S1204).
  • the management server 130 transmits a request for statistical amount quality to the storage node 100 along with a value set in the data ID input field 301.
  • the processor 102 starts a process of responding to the management server 130 with the statistical amount quality using reception of the request of the statistical amount quality by the front-end interface 101 of the storage node 100 as a trigger.
  • the processor 102 acquires a data ID received by the front-end interface 101. Subsequently, of the information regarding the statistical amount quality registered in the database in S1006, the processor 102 acquires information corresponding to the ID from the database provided by the database program 409. Subsequently, the processor 102 responds to the management server 130 with the statistical amount quality acquired from the database via the front-end interface 101.
  • When the management server 130 receives the response from the storage node 100, it displays the value of the received statistical amount quality in the statistical amount quality field 303 of the statistical amount quality checking screen 300.
  • the statistical amount quality responding process may be performed, for example, when at least one of the data reading program 404 and the statistical amount quality calculation program 405 is executed by the processor 102.
  • the storage system 50 includes the interface device 58 (for example, the back-end interface 107) connected to the storage device 59 (for example, the drive 150); and the processor 52 (for example, the processor 102) configured to perform irreversible compression on time-series data (for example, sensor data) using the compressor/decompressor 13 based on machine learning and store post-compression data which is time-series data subjected to the irreversible compression in the storage device 59.
  • An input to the compressor/decompressor 13 is the original data 16, which is pre-compression time-series data, and the corresponding output of the compressor/decompressor 13 is the decompressed data 17, which is data obtained by decompressing the post-compression data of the original data 16.
  • For each of one or more kinds of statistical amounts, the processor 52 provides the first interface 55, which is an interface receiving an input of one or more parameters necessary to calculate the statistical amount.
  • the machine learning of the compressor/decompressor 13 is performed on each of one or more kinds of statistical amounts based on one or more parameters input via the first interface 55.
  • the post-compression data of the original data 16 is compressed using the compressor/decompressor 13 after the machine learning, and the post-compression data of the original data 16 is stored in the storage device 59.
  • The post-compression data produced by the compressor/decompressor 13 learned based on the parameters designated via the first interface 55 does not include information that has no influence on the accuracy requisite of the statistical amount calculated based on those parameters (that is, information other than the information necessary to calculate the statistical amount). Therefore, the compression ratio is improved more than in the comparative example in relation to the same accuracy requisite of the statistical amount.
  • the above-described storage system 50 can control the statistical amount remaining in the compression result based on designation of the user.
  • the above-described storage system 50 can also be applied to irreversible compression of time-series data other than the sensor data, for example, sound data.
  • For example, the first interface 55 may receive a parameter for the kind of statistical amount "power spectrum" (frequency: 100 Hz to 200 Hz).
  • The processor 52 manages the management information 41 (for example, information including the statistical amount parameter management table 410) including one or more parameters input via the first interface with regard to each of the one or more kinds of statistical amounts. Thus, it is possible to manage one or more parameters for each kind of statistical amount.
  • the storage system 50 may include, for example, the memory 54 (for example, the RAM 104) and the memory 54 may store the management information 41.
  • the processor 52 calculates a statistical amount value of each of the one or more kinds of statistical amounts based on one or more parameters input via the first interface 55 in relation to the original data 16.
  • the processor 52 calculates a statistical amount value of each of the one or more kinds of statistical amounts based on the one or more parameters input via the first interface 55 in relation to the decompressed data 17 corresponding to the original data 16.
  • the machine learning of the compressor/decompressor 13 is performed based on the statistical amount value calculated for each of one or more kinds of statistical amounts in relation to the original data 16 and the statistical amount value calculated for each of one or more kinds of statistical amounts in relation to the decompressed data 17. In this way, it is possible to perform the machine learning of the compressor/decompressor 13 based on one or more parameters input via the first interface 55.
  • the machine learning of the compressor/decompressor 13 is performed based on the objective function 20 set in the compressor/decompressor 13.
  • The processor 52 sets, in the objective function 20 of the compressor/decompressor 13, one or more kinds of values (errors) regarding the difference between the statistical amount value calculated for each of the one or more kinds of statistical amounts in relation to the original data 16 and the statistical amount value calculated for each of the one or more kinds of statistical amounts in relation to the decompressed data 17.
  • As an example of the objective function 20, there is a loss function. In general, a mean of errors between times is used in the loss function. However, there is no mechanism for uniformizing the error across times. Therefore, there is a problem that the error varies from time to time.
  • a term for improving a compression ratio is added to the loss function in addition to a term representing an error in some cases.
  • The trade-off between a compression ratio and an error can be adjusted in accordance with the ratio of the term of the error to the term of the compression ratio, that is, a coefficient by which the term of the compression ratio is multiplied (hereinafter referred to as a compression coefficient).
  • a relation among the compression coefficient, the error, and the compression ratio is generally not known. To adjust this relation, many compressors/decompressors in which the compression coefficient is changed have to be learned, and thus it takes some time. According to the storage system 50, this problem is solved. That is, the variation in the error between the times or the statistical amounts can be reduced.
  • An interface (for example, a UI) that receives a target value of an error designated by a user may be provided.
  • the loss function can be set or the compression coefficient can be automatically adjusted so that the compression ratio is optimized within a range in which the target value designated via the interface is satisfied.
  • One or more kinds of statistical amounts may include at least one of a mean and a dispersion.
  • one or more parameters may include the number of samples and a calculation frequency with regard to one kind of statistical amount. The parameters may be designated as parameters necessary to calculate the statistical amount.
  • The processor 52 may provide the second interface 56, which is an interface that displays statistical amount quality conforming to a difference between the statistical amount value calculated for each of the one or more kinds of statistical amounts in relation to the original data 16 and the statistical amount value calculated for each of the one or more kinds of statistical amounts in relation to the decompressed data 17 obtained by decompressing the post-compression data of the compressor/decompressor 13 after the machine learning.
  • the user can know the statistical amount quality obtained from the data 17 compressed and decompressed by the compressor/decompressor 13 after the machine learning.
  • the statistical amount quality may be one or more kinds of values regarding the difference between the statistical amount value calculated for each of the one or more kinds of statistical amounts in relation to the original data 16 and the statistical amount value calculated for each of the one or more kinds of statistical amounts in relation to the decompressed data 17 of the post-compression data of the original data 16.
  • the kind of statistical amount may be designated via the first interface 55. Thus, it is possible to designate information necessary to calculate the statistical amount more accurately.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Operations Research (AREA)
  • Medical Informatics (AREA)
  • Testing And Monitoring For Control Systems (AREA)
  • Recording Measured Values (AREA)
  • Debugging And Monitoring (AREA)
EP20163179.3A 2019-06-12 2020-03-13 Storage system and storage control method Withdrawn EP3751743A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2019109585A JP7328799B2 (ja) 2019-06-12 2019-06-12 Storage system and storage control method

Publications (1)

Publication Number Publication Date
EP3751743A1 2020-12-16

Family

ID=69845845

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20163179.3A Withdrawn EP3751743A1 (en) 2019-06-12 2020-03-13 Storage system and storage control method

Country Status (4)

Country Link
US (1) US11580196B2 (en)
EP (1) EP3751743A1 (en)
JP (1) JP7328799B2 (ja)
CN (1) CN112087234A (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114077609B (zh) * 2022-01-19 2022-04-22 北京四维纵横数据技术有限公司 Data storage and retrieval method, apparatus, computer-readable storage medium, and electronic device

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5423325A (en) * 1993-03-12 1995-06-13 Hewlett-Packard Corporation Methods for enhancement of HRV and late potentials measurements
US6011868A (en) * 1997-04-04 2000-01-04 Hewlett-Packard Company Bitstream quality analyzer
JP2007323401A (ja) 2006-06-01 2007-12-13 Kagawa Univ Data processing device, data restoration device, data processing method, and data restoration method
US8306039B2 (en) * 2008-12-15 2012-11-06 Ciena Corporation Methods and systems for automatic transport path selection for multi-homed entities in stream control transmission protocol
JP6571027B2 (ja) 2016-03-07 2019-09-04 Mitsubishi Electric Information Network Corporation Data storage device and data storage program
JP6318211B2 (ja) 2016-10-03 2018-04-25 Preferred Networks, Inc. Data compression device, data reproduction device, data compression method, data reproduction method, and data transfer method
JP7017861B2 (ja) * 2017-03-23 2022-02-09 Hitachi Ltd Anomaly detection system and anomaly detection method
JP6898778B2 (ja) 2017-06-02 2021-07-07 Hitachi Ltd Machine learning system and machine learning method
JP6691079B2 (ja) 2017-08-25 2020-04-28 Nippon Telegraph and Telephone Corporation Detection device, detection method, and detection program
US10679129B2 (en) * 2017-09-28 2020-06-09 D5Ai Llc Stochastic categorical autoencoder network
JP6826021B2 (ja) * 2017-11-20 2021-02-03 Hitachi Ltd Storage system
US11717686B2 (en) * 2017-12-04 2023-08-08 Neuroenhancement Lab, LLC Method and apparatus for neuroenhancement to facilitate learning and performance
US20190228110A1 (en) * 2018-01-19 2019-07-25 General Electric Company System and method for abstracting characteristics of cyber-physical systems
US11902369B2 (en) * 2018-02-09 2024-02-13 Preferred Networks, Inc. Autoencoder, data processing system, data processing method and non-transitory computer readable medium
CA3095109A1 (en) * 2018-03-23 2019-09-26 Memorial Sloan Kettering Cancer Center Deep encoder-decoder models for reconstructing biomedical images
EP3775821A1 (en) * 2018-04-11 2021-02-17 Dolby Laboratories Licensing Corporation Perceptually-based loss functions for audio encoding and decoding based on machine learning
WO2020070376A1 (en) * 2018-10-02 2020-04-09 Nokia Technologies Oy An apparatus, a method and a computer program for running a neural network
US11037278B2 (en) * 2019-01-23 2021-06-15 Inception Institute of Artificial Intelligence, Ltd. Systems and methods for transforming raw sensor data captured in low-light conditions to well-exposed images using neural network architectures

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6529927B1 (en) 2000-03-31 2003-03-04 The Regents Of The University Of California Logarithmic compression methods for spectral data
JP2007199891A 2006-01-25 2007-08-09 Hitachi Ltd Storage system and storage control device
US20190081637A1 (en) * 2017-09-08 2019-03-14 Nvidia Corporation Data inspection for compression/decompression configuration and data type determination

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ADITYA GROVER ET AL: "Uncertainty Autoencoders: Learning Compressed Representations via Variational Information Maximization", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 26 December 2018 (2018-12-26), XP081200418 *
ALIREZA MAKHZANI ET AL: "k-sparse autoencoders", ARXIV:1312.5663V2 [CS.LG], 22 March 2014 (2014-03-22), XP055210670, Retrieved from the Internet <URL:http://arxiv.org/abs/1312.5663v2> [retrieved on 20150901] *
ANDREW NG: "Sparse autoencoder", CS294A/CS294W DEEP LEARNING AND UNSUPERVISED FEATURE LEARNING WINTER 2011 LECTURE NOTES, 31 December 2011 (2011-12-31), Stanford, CA, USA, XP055725071, Retrieved from the Internet <URL:https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf> [retrieved on 20200825] *
ANONYMOUS: "Autoencoder", WIKIPEDIA, 11 June 2019 (2019-06-11), XP055725083, Retrieved from the Internet <URL:https://en.wikipedia.org/w/index.php?title=Autoencoder&oldid=901360758> [retrieved on 20200825] *
CAGLAR AYTEKIN ET AL: "A Compression Objective and a Cycle Loss for Neural Image Compression", 24 May 2019 (2019-05-24), XP055724805, Retrieved from the Internet <URL:https://arxiv.org/pdf/1905.10371.pdf> [retrieved on 20200824] *

Also Published As

Publication number Publication date
JP7328799B2 (ja) 2023-08-17
CN112087234A (zh) 2020-12-15
JP2020201185A (ja) 2020-12-17
US11580196B2 (en) 2023-02-14
US20200394256A1 (en) 2020-12-17

Similar Documents

Publication Publication Date Title
CN106960219B (zh) Image recognition method and apparatus, computer device, and computer-readable medium
CN110347873B (zh) Video classification method and apparatus, electronic device, and storage medium
CN107844837B (zh) Method and system for tuning algorithm parameters of a machine learning algorithm
US20220147822A1 (en) Training method and apparatus for target detection model, device and storage medium
JP2022058915A (ja) Method and apparatus for training an image recognition model, method and apparatus for recognizing an image, electronic device, storage medium, and computer program
CN113807440B (zh) Method, device, and medium for processing multimodal data using a neural network
JP2020522794A (ja) Neural network classification
US11177823B2 (en) Data compression by local entropy encoding
WO2017132010A1 (en) Machine learning through parallelized stochastic gradient descent
US20210327427A1 (en) Method and apparatus for testing response speed of on-board equipment, device and storage medium
CN113379627A (zh) Training method for an image enhancement model and method for enhancing an image
CN111563593B (zh) Training method and apparatus for a neural network model
US20230066021A1 (en) Object detection
CN113743607A (zh) Training method for an anomaly detection model, anomaly detection method, and apparatus
CN115063875A (zh) Model training method, image processing method, apparatus, and electronic device
KR20200089588A (ko) Electronic device and control method thereof
US20230177326A1 (en) Method and apparatus for compressing neural network model
WO2023020456A1 (zh) Quantization method and apparatus for a network model, device, and storage medium
US20150012644A1 (en) Performance measurement method, storage medium, and performance measurement device
JP2023085353A (ja) Feature extraction model training method, image classification method, and related apparatus
EP3751743A1 (en) Storage system and storage control method
CN114494814A (zh) Attention-based model training method, apparatus, and electronic device
CN116013354B (zh) Training method for a deep learning model and method for controlling mouth-shape changes of a virtual avatar
CN116703659A (zh) Data processing method, apparatus, and electronic device for engineering consulting
WO2021177394A1 (ja) Data processing system and data compression method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200313

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220421

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20240208