CN109816042B - Data classification model training method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN109816042B
Authority
CN
China
Prior art keywords
training
iteration
learning rate
data
classification model
Prior art date
Legal status
Active
Application number
CN201910105031.6A
Other languages
Chinese (zh)
Other versions
CN109816042A (en)
Inventor
申世伟
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910105031.6A
Publication of CN109816042A
Application granted
Publication of CN109816042B

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to the field of deep learning, and in particular, to a method and an apparatus for training a data classification model, an electronic device, and a storage medium. The method comprises the following steps: acquiring a plurality of first sample data, a plurality of second sample data, a first iteration number and a second iteration number; within the first iteration number, training a first data classification model in an annular training mode based on the plurality of first sample data to obtain a second data classification model; and within the second iteration number, training the second data classification model in a tree-shaped training mode based on the plurality of second sample data to obtain a third data classification model. Because the tree-shaped training mode is combined with the annular training mode during data classification training, a large amount of time can be saved compared with using the tree-shaped training mode alone, the accuracy of the data classification model is ensured, and the training efficiency of the data classification model is improved.

Description

Data classification model training method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of deep learning, and in particular, to a method and an apparatus for training a data classification model, an electronic device, and a storage medium.
Background
With the development of deep learning technology, people can effectively process machine classification problems such as voice recognition or image classification by using a deep learning method. When data classification is performed, a data classification model needs to be trained first, and data classification is performed based on the data classification model. In order to improve the accuracy of data classification model classification, a large amount of sample data is often required to be provided, and more sample data also means a heavy computational burden. In the field of deep learning training, compared with a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) is very advantageous in terms of matrix parallelization calculation, and is more suitable for training a data classification model.
In the related art, a common way to train a data classification model is the tree training mode, for example a conventional multi-card distributed algorithm: the sample data and the complete network structure of the model are sent to each GPU, and a plurality of GPUs are used for model training. At the end of each training round, one summary GPU collects the model parameters trained on the other GPUs, determines the average value of the model parameters according to the model parameters obtained by the other GPUs, and then distributes the average value of the model parameters back to the other GPUs. Model training continues in this way until the iteration number reaches a preset total iteration number, at which point the data classification model is obtained.
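For illustration only, the following is a minimal sketch of the parameter-averaging step of such a tree training mode, assuming a PyTorch-style setting in which each GPU holds its own copy of the model parameters as a state dict; the function name and arguments are illustrative and not part of the patent.

import torch

def tree_average(worker_states, summary_device="cuda:0"):
    """Sketch of one tree-mode synchronization step: a summary GPU collects
    the model parameters trained on every GPU, averages them, and the
    average is then distributed back to all GPUs."""
    averaged = {}
    for name in worker_states[0]:
        # gather the parameter tensor from every worker onto the summary GPU
        stacked = torch.stack([state[name].to(summary_device) for state in worker_states])
        averaged[name] = stacked.mean(dim=0)
    # every worker continues training from the same averaged parameters
    return [{name: tensor.clone() for name, tensor in averaged.items()}
            for _ in worker_states]

The communication cost of this step grows with the number of workers and the number of parameters, which is the bottleneck the disclosure addresses below.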
The problem with the related art is that, within the preset total iteration number, every iteration must synchronize the model parameters of all the GPUs for the summary operation. As the total number of GPUs and/or the number of model parameters grows, the communication time required for summarizing increases linearly, and the required data classification model cannot be trained in a short time. Training the data classification model therefore occupies a large amount of time, and the efficiency of training the data classification model is low.
Disclosure of Invention
The present disclosure provides a method and an apparatus for training a data classification model, an electronic device, and a storage medium, which can overcome the problem that training a data classification model occupies a large amount of time and is therefore inefficient.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for training a data classification model, including:
acquiring a plurality of first sample data, a plurality of second sample data, a first iteration number and a second iteration number, wherein the sum of the first iteration number and the second iteration number is the total iteration number of model training, the first sample data is used for training a first data classification model, and the second sample data is used for training a second data classification model;
within the first iteration times, based on the plurality of first sample data, training a first data classification model in an annular training mode to obtain a second data classification model;
and in the second iteration times, training the second data classification model by using a tree training mode based on the plurality of second sample data to obtain a third data classification model.
In a possible implementation manner, the training a first data classification model by using a circular training manner based on the plurality of first sample data within the first iteration number to obtain a second data classification model includes:
training the first data classification model through a plurality of first training machines and the plurality of first sample data to obtain a first model parameter obtained by training each first training machine;
transmitting the first model parameters obtained by each first training machine according to the annular connection sequence of each first training machine, so that each first training machine obtains the first model parameters obtained by other first training machines;
and for each first training machine, performing iterative training on the first data classification model according to the first model parameters obtained by training the first training machine, the first model parameters obtained by training other first training machines obtained by the first training machine, and the plurality of first sample data until the iteration times reach the first iteration times to obtain the second data classification model.
In another possible implementation manner, the training the first data classification model through the plurality of first training machines and the plurality of first sample data to obtain the first model parameter trained by each first training machine includes:
dividing the plurality of first sample data into a first number of sample data groups, each sample data group including at least one first sample data, the first number being the number of the plurality of first training machines;
for each first training machine of each iteration, selecting one sample data set from the first number of sample data sets that is not assigned to the first training machine;
and performing iterative training on the first data classification model through the first training machine and the sample data set to obtain the first model parameter.
In another possible implementation manner, the iteratively training the first data classification model according to the first model parameter obtained by training the first training machine, the first model parameter obtained by training the other first training machines obtained by the first training machine, and the plurality of first sample data until the number of iterations reaches the first number of iterations, to obtain the second data classification model, includes:
determining a first learning rate of the first data classification model corresponding to current iterative training, and determining a second learning rate of the first training machine;
in each iteration process, calculating the first model parameter of the first training machine, the first model parameters of other first training machines acquired by the first training machine and the plurality of first sample data according to the first learning rate and the second learning rate, updating a second data classification model according to the calculation result, and repeating the process of each iteration until the number of iterations reaches the first number of iterations to obtain the second data classification model.
In another possible implementation manner, the determining a first learning rate of the first data classification model corresponding to the training of the current iteration includes:
when the iteration number of the current iteration training is zero, taking an initial learning rate as a first learning rate of the current iteration training;
when the iteration number of the current iteration training is not zero and the current iteration number is within a third iteration number, obtaining a third learning rate of the previous iteration, and linearly increasing the third learning rate to obtain a first learning rate of the current iteration training, wherein the third iteration number is smaller than the first iteration number;
and when the iteration number of the current iteration training is within a fourth iteration number, obtaining a third learning rate of the previous iteration, and attenuating the third learning rate by using a polynomial attenuation strategy to obtain a first learning rate of the current iteration training, wherein the fourth iteration number is greater than the third iteration number and smaller than the first iteration number.
In another possible implementation, the determining the second learning rate of the first training machine includes:
determining a network layer where the first training machine is located, the weight of the network layer and the gradient of the network layer;
determining a second learning rate of the first training machine according to the network layer, the weight of the network layer, the gradient of the network layer and a first model parameter of the first training machine;
wherein the second learning rate is positively correlated with the weights of the network layer and the first model parameters of the first training machine, and the second learning rate is negatively correlated with the gradient of the network layer.
In another possible implementation manner, the training the second data classification model in the second iteration number by using a tree training mode based on the plurality of second sample data to obtain a third data classification model includes:
training the second data classification model through the plurality of second training machines and the plurality of second sample data to obtain second model parameters obtained by training each second training machine;
transmitting the second model parameters of the plurality of second training machines to a summarizing machine, and determining third model parameters through the summarizing machine based on the second model parameters obtained by training each second training machine;
sending the third model parameter to each second training machine through the summarizing machine;
and for each second training machine, performing iterative training on the second data classification model according to the third model parameter and the plurality of second sample data until the iteration times reach the second iteration times to obtain the third data classification model.
In another possible implementation manner, the iteratively training the second data classification model according to the third model parameter and the plurality of second sample data until the number of iterations reaches the second number of iterations to obtain the third data classification model includes:
determining a fourth learning rate of the second data classification model during current iterative training;
and in each iteration process, calculating the third model parameter and the plurality of second sample data according to the fourth learning rate, updating the second data classification model according to the calculation result, and repeating the process of each iteration until the iteration number reaches the second iteration number to obtain the third data classification model.
In another possible implementation manner, the determining a fourth learning rate of the second data classification model during the current iteration comprises:
acquiring a fifth learning rate of the second data classification model, wherein the fifth learning rate is a learning rate at the end of training the first data classification model by using an annular training mode;
when the number of iterations of the current iterative training is zero, determining a ratio of the fifth learning rate to a second number as the fourth learning rate, the second number being the number of the plurality of second training machines;
and when the iteration number of the current iterative training is not zero, obtaining a sixth learning rate of the previous iteration, and attenuating the sixth learning rate by using a polynomial attenuation strategy to obtain a fourth learning rate of the current iterative training.
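For illustration only (the helper below is not language from the claims, and the exact polynomial form is an assumption), the learning-rate handoff from the annular phase to the tree phase described above could be sketched as:

def fourth_learning_rate(iteration, fifth_lr, num_second_machines,
                         prev_lr=None, decay_steps=1000, end_lr=1e-5, power=2.0):
    """Sketch: when the tree phase starts (iteration 0), divide the fifth
    learning rate (the rate at the end of the annular phase) by the number
    of second training machines; afterwards decay the previous iteration's
    (sixth) learning rate with a polynomial decay strategy."""
    if iteration == 0:
        return fifth_lr / num_second_machines
    progress = min(iteration, decay_steps) / decay_steps
    return end_lr + (prev_lr - end_lr) * (1.0 - progress) ** power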
In another possible implementation manner, the ring training manner is a training manner that trains by using a ring-reduce algorithm.
In another possible implementation manner, when data to be classified is classified, the data to be classified is input into the third data classification model, and a classification result of the data is obtained.
According to a second aspect of the embodiments of the present disclosure, there is provided a data classification model training apparatus, including:
the acquisition module is configured to acquire a plurality of first sample data, a plurality of second sample data, a first iteration number and a second iteration number, wherein the sum of the first iteration number and the second iteration number is the total iteration number of model training, the first sample data is used for training a first data classification model, and the second sample data is used for training a second data classification model;
the first training module is configured to train a first data classification model by using a circular training mode based on the plurality of first sample data within the first iteration number to obtain a second data classification model;
the second training module is configured to train the second data classification model in a tree training mode based on the plurality of second sample data within the second iteration number to obtain a third data classification model;
and the input module is configured to input the data to be classified into the third data classification model when the data to be classified is classified, so as to obtain a classification result of the data.
In a possible implementation manner, the first training module is further configured to train the first data classification model through a plurality of first training machines and the plurality of first sample data, so as to obtain a first model parameter trained by each first training machine;
transmitting the first model parameters obtained by each first training machine according to the annular connection sequence of each first training machine, so that each first training machine obtains the first model parameters obtained by other first training machines;
and for each first training machine, performing iterative training on the first data classification model according to the first model parameters obtained by training the first training machine, the first model parameters obtained by training other first training machines obtained by the first training machine, and the plurality of first sample data until the iteration times reach the first iteration times to obtain the second data classification model.
In another possible implementation manner, the first training module is further configured to divide the plurality of first sample data into a first number of sample data groups, each sample data group including at least one first sample data, where the first number is the number of the plurality of first training machines;
for each first training machine of each iteration, selecting one sample data set from the first number of sample data sets that is not assigned to the first training machine;
and performing iterative training on the first data classification model through the first training machine and the sample data set to obtain the first model parameter.
In another possible implementation manner, the first training module is further configured to determine a first learning rate of the first data classification model corresponding to the current iteration training, and determine a second learning rate of the first training machine;
in each iteration process, calculating the first model parameter of the first training machine, the first model parameters of other first training machines acquired by the first training machine and the plurality of first sample data according to the first learning rate and the second learning rate, updating a second data classification model according to the calculation result, and repeating the process of each iteration until the number of iterations reaches the first number of iterations to obtain the second data classification model.
In another possible implementation manner, the first training module is further configured to use an initial learning rate as the first learning rate of the current iterative training when the number of iterations of the current iterative training is zero;
when the iteration number of the current iteration training is not zero and the current iteration number is within a third iteration number, obtaining a third learning rate of the previous iteration, and linearly increasing the third learning rate to obtain a first learning rate of the current iteration training, wherein the third iteration number is smaller than the first iteration number;
and when the iteration number of the current iteration training is within a fourth iteration number, obtaining a third learning rate of the previous iteration, and attenuating the third learning rate by using a polynomial attenuation strategy to obtain a first learning rate of the current iteration training, wherein the fourth iteration number is greater than the third iteration number and smaller than the first iteration number.
In another possible implementation manner, the first training module is further configured to determine a network layer where the first training machine is located, a weight of the network layer, and a gradient of the network layer;
determining a second learning rate of the first training machine according to the network layer, the weight of the network layer, the gradient of the network layer and a first model parameter of the first training machine;
wherein the second learning rate is positively correlated with the weights of the network layer and the first model parameters of the first training machine, and the second learning rate is negatively correlated with the gradient of the network layer.
In another possible implementation manner, the second training module is further configured to train the second data classification model through the plurality of second training machines and the plurality of second sample data, so as to obtain a second model parameter trained by each second training machine;
transmitting the second model parameters of the plurality of second training machines to a summarizing machine, and determining third model parameters through the summarizing machine based on the second model parameters obtained by training each second training machine;
sending the third model parameter to each second training machine through the summarizing machine;
and for each second training machine, performing iterative training on the second data classification model according to the third model parameter and the plurality of second sample data until the iteration times reach the second iteration times to obtain the third data classification model.
In another possible implementation manner, the second training module is further configured to determine a fourth learning rate of the second data classification model when training in the current iteration;
and in each iteration process, calculating the third model parameter and the plurality of second sample data according to the fourth learning rate, updating the second data classification model according to the calculation result, and repeating the process of each iteration until the iteration number reaches the second iteration number to obtain the third data classification model.
In another possible implementation manner, the second training module is further configured to obtain a fifth learning rate of the second data classification model, where the fifth learning rate is a learning rate at the end of training the first data classification model by using a circular training manner;
when the number of iterations of the current iterative training is zero, determining a ratio of the fifth learning rate to a second number as the fourth learning rate, the second number being the number of the plurality of second training machines;
and when the iteration number of the current iterative training is not zero, obtaining a sixth learning rate of the previous iteration, and attenuating the sixth learning rate by using a polynomial attenuation strategy to obtain a fourth learning rate of the current iterative training.
In another possible implementation manner, the ring training manner is a training manner that trains by using a ring-reduce algorithm.
In another possible implementation manner, the apparatus further includes:
and the input module is configured to input the data to be classified into the third data classification model when the data to be classified is classified, so as to obtain a classification result of the data.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising one or more processors;
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the method of data classification model training of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor of an electronic device, implement the method for data classification model training of the first aspect described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
training a first data classification model in an annular training mode based on a plurality of first sample data within a first iteration number to obtain a second data classification model; and training the second data classification model in a tree-shaped training mode based on a plurality of second sample data within a second iteration number to obtain a third data classification model. When performing data classification training, the annular training mode saves a large amount of time compared with the conventional training mode, while combining it with the tree-shaped training mode ensures the accuracy of the trained data classification model, thereby improving the training efficiency of the data classification model.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of data classification model training in accordance with an exemplary embodiment.
FIG. 2 is a flow diagram illustrating another method of data classification model training in accordance with an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating a circular training pattern in accordance with an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating another circular training approach in accordance with an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating another circular training approach in accordance with an exemplary embodiment.
FIG. 6 is a schematic diagram illustrating another circular training approach in accordance with an exemplary embodiment.
FIG. 7 is a schematic diagram illustrating another circular training approach in accordance with an exemplary embodiment.
FIG. 8 is a schematic diagram illustrating another circular training approach in accordance with an exemplary embodiment.
FIG. 9 is a schematic diagram illustrating another circular training approach in accordance with an exemplary embodiment.
FIG. 10 is a schematic diagram illustrating a tree training approach in accordance with an exemplary embodiment.
FIG. 11 is a block diagram illustrating a data classification model training apparatus according to an exemplary embodiment.
FIG. 12 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flowchart illustrating a data classification model training method according to an exemplary embodiment. As shown in FIG. 1, the method is applied to an electronic device and includes the following steps.
In step S101, the electronic device obtains a plurality of first sample data, a plurality of second sample data, a first iteration count and a second iteration count, where a sum of the first iteration count and the second iteration count is a total iteration count of model training.
In step S102, within a first iteration count, the electronic device trains the first data classification model in an annular training mode based on the plurality of first sample data, so as to obtain a second data classification model.
In step S103, within a second iteration count, the electronic device trains the second data classification model in a tree training mode based on a plurality of second sample data, so as to obtain a third data classification model.
In the embodiment of the disclosure, a first data classification model is trained in an annular training mode based on a plurality of first sample data within a first iteration number to obtain a second data classification model; the second data classification model is then trained in a tree-shaped training mode based on a plurality of second sample data within a second iteration number to obtain a third data classification model. When performing data classification training, the annular training mode saves a large amount of time compared with the conventional training mode, while combining it with the tree-shaped training mode ensures the accuracy of the trained data classification model, thereby improving the training efficiency of the data classification model.
FIG. 2 is a flowchart illustrating another data classification model training method according to an exemplary embodiment. As shown in FIG. 2, the method is applied to an electronic device and includes the following steps.
In step S201, the electronic device acquires a plurality of first sample data and a plurality of second sample data.
The first sample data is used for training a first data classification model, and the second sample data is used for training a second data classification model. The first sample data may be an image, text information, or a voice signal, and the second sample data may likewise be an image, text information, or a voice signal. The selection of the plurality of second sample data may be the same as the plurality of first sample data, that is, the same data is used as both the first sample data and the second sample data; the selection of the plurality of second sample data may also be different from the plurality of first sample data, that is, different data is used as the first sample data and the second sample data respectively; the selection of the plurality of second sample data may also partially overlap the plurality of first sample data, that is, the first sample data and the second sample data have some data in common and some data that differ. The number of the first sample data and the number of the second sample data may be the same or different.
The electronic device may be a portable mobile electronic device, such as a smart phone, a tablet computer, a notebook computer, or a desktop computer, or another electronic device capable of implementing the present disclosure. The electronic device may also be referred to by other names, such as user equipment, portable electronic device, laptop electronic device, or desktop electronic device.
In step S202, the electronic device determines a first iteration number and a second iteration number.
In the embodiment of the disclosure, the first data classification model is trained in an annular training mode to obtain a second data classification model, and then the second data classification model is trained in a tree-shaped training mode to obtain a third data classification model. The first iteration times are training times adopting an annular training mode, and the second iteration times are training times adopting a tree-shaped training mode. And the sum of the first iteration times and the second iteration times is the total iteration times of the model training. The first iteration number and the second iteration number may be the same or different.
When the training of the data classification model is carried out on the data with the same order of magnitude, the training speed of the annular training mode is high, and the training precision of the tree-shaped training mode is high. The training time is shortened by using an annular training mode, and the training precision is ensured by using a tree-shaped training mode. Therefore, the electronic device may determine the first iteration number and the second iteration number according to the training requirements of the data classification model and the total iteration number. In a possible implementation manner, when the training requirement is that the precision requirement is higher than the time requirement, the electronic device sets the first iteration number to be greater than the second iteration number; when the training requirement is that the time requirement is higher than the precision requirement, the electronic equipment sets the first iteration number to be smaller than the second iteration number.
In another possible implementation manner, the electronic device may further determine a required accuracy of the data classification model, determine a proportional relationship between the first iteration number and the second iteration number according to the required accuracy, and determine the first iteration number and the second iteration number according to the total iteration number and the proportional relationship.
For example, if the total number of iterations is 100, the first iteration number may be 80 and the second iteration number may be 20.
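As a purely illustrative sketch (the ratio rule and names are assumptions, not part of the disclosure), the first and second iteration numbers could be derived from the total iteration number as follows:

def split_iterations(total_iterations, annular_ratio=0.8):
    """Sketch: split the total iteration number into the first (annular)
    and second (tree) iteration numbers according to a ratio chosen from
    the accuracy/time requirement."""
    first_iterations = int(total_iterations * annular_ratio)
    second_iterations = total_iterations - first_iterations
    return first_iterations, second_iterations

print(split_iterations(100))  # (80, 20), matching the example above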
In step S203, within a first iteration number, the electronic device trains the first data classification model in an annular training mode based on the plurality of first sample data, so as to obtain a second data classification model.
Wherein, the ring training mode can be a training mode for training by using a ring-reduce algorithm. The first data classification model may be an initial data classification model or a data classification model in a training process. This step can be realized by the following steps (1) to (3), including:
(1) the electronic equipment trains the first data classification model through the plurality of first training machines and the plurality of first sample data to obtain first model parameters obtained by training each first training machine.
The first training machine is a device comprising a processor; for example, the first training machine may be a device that includes a CPU or a device that includes a GPU. The first number of the plurality of first training machines may be set and changed as needed, and in the embodiment of the present disclosure, the first number is not specifically limited; for example, the first number may be 5 or 8, etc.
In one possible implementation manner, for each first training machine, the electronic device deploys the first data classification model to the first training machine, inputs a plurality of first sample data into the first data classification model in the first training machine, and trains the first data classification model through the first training machine based on the plurality of first sample data to obtain the first model parameters.
In another possible implementation, the electronic device may group a plurality of first sample data, and assign a set of first sample data to each first training machine. Accordingly, this step can be realized by the following steps (1-1) to (1-3), including:
(1-1) the electronic device divides the plurality of first sample data into a first number of sample data groups, each sample data group including at least one first sample data.
In this step, the electronic device may uniformly divide the plurality of first sample data into a first number of sample data groups, or may unevenly divide the plurality of first sample data into the first number of sample data groups. Accordingly, the number of the first sample data included in each sample data group may be the same or different. And after the electronic equipment divides the plurality of first sample data into a first number of sample data groups, determining a data group identifier of each sample data group, and associating the stored data group identifier with the sample data groups. Wherein the data group identification may be a number of the data group.
For example, if the first number is M, the electronic device uniformly divides the plurality of first sample data into M sample data groups, each sample data group including at least one first sample data and containing data of the same size. Each sample data group has a unique data group identification, which may be DATA0, DATA1, DATA2, …, DATA(M-1).
(1-2) for each first training machine of each iteration, the electronic device selects a sample data set from the first number of sample data sets that is not assigned to the first training machine.
The electronic device allocates one sample data group to each first training machine, and each first training machine corresponds to a different sample data group. During the first iterative training, the electronic device randomly allocates the first number of sample data groups to the first number of first training machines, and establishes a mapping relationship between the data group identifiers of the sample data groups and the machine identifiers of the first training machines. The machine identifier may be an SN (Serial Number) or an IP (Internet Protocol) address of the first training machine. During the second iterative training, for each first training machine, the electronic device determines, according to the established mapping relationship, the data group identifier of a sample data group not yet allocated to the first training machine, and allocates the sample data group corresponding to that data group identifier among the first number of sample data groups to the first training machine.
For example, when M is 3 and the first training machines are GPUs, the 3 first training machines are GPU0, GPU1, and GPU2, and the 3 sample data groups are DATA0, DATA1, and DATA2. In the first iterative training, the electronic device assigns DATA0 to GPU0, DATA1 to GPU1, and DATA2 to GPU2; in the second iterative training, the electronic device assigns DATA1 to GPU0, DATA2 to GPU1, and DATA0 to GPU2; in the third iterative training, the electronic device assigns DATA2 to GPU0, DATA0 to GPU1, and DATA1 to GPU2.
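A minimal sketch of this rotating assignment, assuming the groups are simply shifted by one machine per iteration (the mapping-table bookkeeping described above is omitted; names are illustrative):

def assign_groups(num_machines, iteration):
    """Sketch: give each training machine a sample data group it has not yet
    been assigned, by rotating the group-to-machine mapping every iteration."""
    return {f"GPU{i}": f"DATA{(i + iteration) % num_machines}"
            for i in range(num_machines)}

for it in range(3):
    print(assign_groups(3, it))
# {'GPU0': 'DATA0', 'GPU1': 'DATA1', 'GPU2': 'DATA2'}
# {'GPU0': 'DATA1', 'GPU1': 'DATA2', 'GPU2': 'DATA0'}
# {'GPU0': 'DATA2', 'GPU1': 'DATA0', 'GPU2': 'DATA1'}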
And (1-3) the electronic equipment conducts iterative training on the first data classification model through the first training machine and the sample data set to obtain first model parameters.
In a possible implementation manner, the electronic device inputs the sample data set to a first training machine, and performs iterative training on the first data classification model based on the sample data set through the first training machine during each iterative training to obtain a first model parameter.
In another possible implementation, the electronic device divides the set of sample data into sample data chunks, and only one sample data chunk is used in each iterative training. Correspondingly, the steps can be as follows:
for each first training machine, the electronic device uniformly divides the sample data set distributed on the first training machine into a first number of sample data blocks. And during each iterative training, the electronic equipment selects an unselected sample data block, and the electronic equipment performs iterative training on the first data classification model through the first training machine based on the sample data block to obtain a first model parameter.
For example, the sample data group on each GPU is uniformly divided into M data chunks. Referring to FIG. 3, taking M = 5 as an illustration, there are 5 GPUs: the 5 data blocks of GPU0 are a0, b0, c0, d0, e0; the 5 data blocks of GPU1 are a1, b1, c1, d1, e1; the 5 data blocks of GPU2 are a2, b2, c2, d2, e2; the 5 data blocks of GPU3 are a3, b3, c3, d3, e3; and the 5 data blocks of GPU4 are a4, b4, c4, d4, e4.
(2) The electronic equipment transmits the first model parameters obtained by each first training machine according to the annular connection sequence of each first training machine, so that each first training machine obtains the first model parameters obtained by other first training machines.
For each first training machine, the electronic device determines a previous first training machine and a next first training machine of the first training machine according to the annular connection sequence of each first training machine, receives a first model parameter obtained by training of the previous first training machine sent by the previous first training machine, sends the first model parameter obtained by training of the first training machine to the next first training machine, and so on until each first training machine obtains the first model parameter obtained by each other first training machine. As shown in fig. 4.
(3) For each first training machine, the electronic device conducts iterative training on the first data classification model according to first model parameters obtained through training of the first training machine, first model parameters obtained through training of other first training machines obtained through the first training machine and a plurality of first sample data until the iteration times reach the first iteration times, and a second data classification model is obtained.
This step can be realized by the following steps (3-1) to (3-3), including:
(3-1) the electronic device determining a first learning rate of a first data classification model corresponding to current iterative training;
When the electronic device trains the first data classification model, a gradient descent method is adopted to solve for the model parameters of the first data classification model. In order for the gradient descent method to perform well, the learning rate needs to be controlled within a proper range. The first learning rate is therefore a dynamic learning rate that is adjusted according to the iteration number, which ensures the stability of the training process. The process of determining the first learning rate is as follows:
and when the iteration number of the current iterative training is zero, taking the initial learning rate as the first learning rate of the current iterative training. The initial learning rate is a preset learning rate before training of the first data classification model is started. The smaller the learning rate setting, the more stable the model calculation, and the longer the corresponding time consumption, for example, the initial learning rate may be set to be between 0.01 and 0.08, and in the embodiment of the present disclosure, the initial learning rate may be set to be 0.01 in order to ensure the stability of the training.
And when the iteration times of the current iteration training are not zero and the current iteration times are within the third iteration times, obtaining a third learning rate of the previous iteration, and linearly increasing the third learning rate to obtain a first learning rate of the current iteration training, wherein the third iteration times are smaller than the first iteration times.
After iteration starts, the electronic device determines the first learning rate of the first data classification model corresponding to the current iterative training. When the current iteration number is within the third iteration number, the third learning rate of the previous iteration is obtained and linearly increased to obtain the first learning rate of the current iterative training, the third iteration number being smaller than the first iteration number. The third learning rate is the learning rate at the end of the previous iteration, and the first learning rate is the learning rate of the current iteration; for example, after the first iteration is completed, the learning rate at the end of the first iteration is the third learning rate, and linearly increasing it gives the learning rate of the second iteration. Linearly increasing the third learning rate means that the learning rate is raised step by step in each iteration within the third iteration number, so that after the third iteration number the learning rate has grown to the product of the initial learning rate and the first number, thereby dynamically adjusting the learning rate. For example, when the third iteration number is 5, the initial learning rate is 0.01, and the number of GPUs is 5, the learning rate grows to 0.05 after the 5 iterations are completed: after the first iteration, the third learning rate is 0.01 and is linearly increased to obtain the first learning rate of the second iteration, 0.02; after the second iteration, the third learning rate is 0.02 and is linearly increased to obtain the first learning rate of the third iteration, 0.03; and so on, until after the fifth iteration the learning rate is 0.05.
When the iteration number of the current iterative training is within the fourth iteration number, the third learning rate of the previous iteration is obtained and attenuated using a polynomial decay strategy to obtain the first learning rate of the current iterative training, the fourth iteration number being greater than the third iteration number and smaller than the first iteration number. That is, after the iterations within the third iteration number are completed, the iteration continues; within the fourth iteration number, the electronic device obtains the third learning rate at the end of the previous iteration and attenuates it using the polynomial decay strategy to obtain the first learning rate of the current iterative training. The polynomial decay strategy decays the third learning rate to a preset learning rate over a preset number of decay steps.
It should be noted that the sum of the third iteration number and the fourth iteration number is the first iteration number.
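Combining the warmup and decay rules above, a rough sketch of the first-learning-rate schedule might look like the following; the exact polynomial and the step sizes are assumptions, since the description only fixes the qualitative behavior:

def first_learning_rate(iteration, prev_lr, initial_lr, num_machines,
                        warmup_iters, decay_iters, end_lr=1e-4, power=2.0):
    """Sketch: use the initial learning rate for the first iteration, increase
    it linearly to initial_lr * num_machines within the third iteration number
    (warmup_iters), then decay it polynomially within the fourth iteration
    number (decay_iters)."""
    if iteration == 0:
        return initial_lr
    if iteration < warmup_iters:
        # linear warmup, e.g. 0.01 -> 0.02 -> ... -> 0.05 for 5 GPUs
        return prev_lr + initial_lr * (num_machines - 1) / max(warmup_iters - 1, 1)
    # polynomial decay of the previous iteration's learning rate (form assumed)
    progress = (iteration - warmup_iters) / max(decay_iters, 1)
    return end_lr + (prev_lr - end_lr) * max(1.0 - progress, 0.0) ** power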
(3-2) the electronic device determining a second learning rate of the first training machine corresponding to the current iterative training;
The second learning rate of each first training machine is the learning rate of the first training machine during operation. Different first training machines are located in different network layers, and different network layers have different weights and different gradients. The second learning rate of the network layer where the first training machine is located is positively correlated with the weight of the network layer, positively correlated with the first model parameters of the first training machine, and negatively correlated with the gradient of the network layer. Correspondingly, this step may be as follows: the electronic device determines the network layer where the first training machine is located, the weight of the network layer, and the gradient of the network layer, and determines the second learning rate of the first training machine according to the network layer, the weight of the network layer, the gradient of the network layer, and the first model parameters of the first training machine.
In this step, the electronic device may determine the second learning rate through any algorithm that is positively correlated with the first model parameter of the first training machine and the weight of the network layer and negatively correlated with the gradient of the network layer, in this disclosed embodiment, the algorithm is not specifically limited; for example, the electronic device may determine the second learning rate of each first training machine according to the network layer, the weight of the network layer, the gradient of the network layer, and the first model parameter of the first training machine by formula one.
The formula I is as follows:
λ1 = η · ‖w1‖ / ‖∇L(w1)‖
where l denotes the network layer in which the first training machine is located, λ1 represents the second learning rate of the first training machine, η represents the weight of the network layer in which the first training machine is located, w1 represents the first model parameters of the first training machine (i.e. the parameters of network layer l), and ∇L(w1) represents the gradient of the network layer in which the first training machine is located.
It should be noted that the second learning rate is positively correlated with the weight of the network layer and the first model parameter of the first training machine, and the second learning rate is negatively correlated with the gradient of the network layer.
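The layer-wise rule of Formula I and Formula II resembles layer-wise adaptive rate scaling; the sketch below uses norms of the layer parameters and gradient, which is an assumption consistent with the stated correlations rather than a verbatim reproduction of the patent's formula images, and the function names are illustrative.

import torch

def second_learning_rate(eta, layer_params, layer_grad, eps=1e-8):
    """Formula I sketch: the second (per-layer) learning rate grows with the
    network-layer weight eta and the first model parameters, and shrinks
    with the gradient of the network layer."""
    return eta * layer_params.norm() / (layer_grad.norm() + eps)

def update_layer(layer_params, layer_grad, global_lr, local_lr):
    """Formula II sketch: the update value of the current network parameters
    is the global learning rate times the second learning rate times the
    gradient of the network layer."""
    return layer_params - global_lr * local_lr * layer_grad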
And (3-3) in each iteration process, the electronic equipment calculates the first model parameter of the first training machine, the first model parameters of other first training machines acquired by the first training machine and the plurality of first sample data according to the first learning rate and the second learning rate, updates the second data classification model according to the calculation result, and repeats the process of each iteration until the iteration times reach the first iteration times to obtain the second data classification model.
And in an iteration process, each first training machine sequentially operates the data of each data block according to the sequence until the first number of data blocks are all calculated.
For each first training machine, data is exchanged pairwise between first training machines during one iteration. After the first operation is finished, the first data exchange is performed: each first training machine sends the operation result of the data block operated on this time, namely the first model parameters, to the next first training machine, and at the same time receives the first model parameters sent by the previous first training machine, and the current first model parameters are updated. When the first model parameters of the network layer where the first training machine is located are updated, Formula II may be used for the calculation.
The formula II is as follows:
Δw(l,t) = γ · λ1 · ∇L(w(l,t))
where l represents the network layer where the first training machine is located, t represents the update time, γ represents the global learning rate, λ1 represents the second learning rate of the first training machine, ∇L(w(l,t)) represents the gradient of the network layer where the first training machine is located, and Δw(l,t) represents the update value of the current network parameters.
When the second operation is performed, each first training machine performs the operation based on the received first model parameters and the data block corresponding to those model parameters to obtain new first model parameters, then performs the second data exchange, and repeats this process until it has been carried out a number of times equal to the first number minus one. Each first training machine then synchronizes the first model parameters of each data block with the other first training machines in the direction opposite to the direction of data exchange, until all first training machines have collected the first model parameters of all data blocks. The above steps of one iteration are repeated until the iteration number reaches the first iteration number, at which point the second data classification model is obtained.
For example, referring to FIG. 5 to FIG. 9, the first number is 5. In FIG. 5, there are 5 GPUs: the 5 data blocks of GPU0 are a0, b0, c0, d0, e0; the 5 data blocks of GPU1 are a1, b1, c1, d1, e1; the 5 data blocks of GPU2 are a2, b2, c2, d2, e2; the 5 data blocks of GPU3 are a3, b3, c3, d3, e3; and the 5 data blocks of GPU4 are a4, b4, c4, d4, e4. In the first operation, GPU0 operates on data block a0, GPU1 operates on b1, GPU2 operates on c2, GPU3 operates on d3, and GPU4 operates on e4. In the first data exchange, GPU0 sends data block a0 and the first model parameters of GPU0 to GPU1, and receives data block e4 and the first model parameters of GPU4 sent by GPU4; by analogy, GPU1 receives the data sent by GPU0 while sending data to GPU2, until each GPU completes the data exchange. In the second operation, referring to FIG. 6, GPU0 operates on e0+e4, GPU1 operates on a0+a1, GPU2 operates on b1+b2, GPU3 operates on c2+c3, and GPU4 operates on d3+d4. In the second data exchange, GPU0 sends e0+e4 and the first model parameters of GPU0 to GPU1, and receives d3+d4 and the first model parameters of GPU4; by analogy, until each GPU completes the data exchange. In the third operation, referring to FIG. 7, GPU0 operates on d0+d3+d4, GPU1 operates on e0+e1+e4, GPU2 operates on a0+a1+a2, GPU3 operates on b1+b2+b3, and GPU4 operates on c2+c3+c4. In the third data exchange, GPU0 sends d0+d3+d4 and the first model parameters of GPU0 to GPU1, and receives c2+c3+c4 and the first model parameters of GPU4; by analogy, until each GPU completes the data exchange. In the fourth operation, referring to FIG. 8, GPU0 operates on c0+c2+c3+c4, GPU1 operates on d0+d1+d3+d4, GPU2 operates on e0+e1+e2+e4, GPU3 operates on a0+a1+a2+a3, and GPU4 operates on b1+b2+b3+b4. In the fourth data exchange, GPU0 sends c0+c2+c3+c4 and the first model parameters of GPU0 to GPU1, and receives b1+b2+b3+b4 and the first model parameters of GPU4; by analogy, until each GPU completes the data exchange. When the fourth data exchange is finished, one data block on each GPU has collected the corresponding data of all the other GPUs; for example, GPU0 has collected b0+b1+b2+b3+b4 and GPU1 has collected c0+c1+c2+c3+c4. At this point, the first part of one iteration, the scatter-reduce process, is complete. The data blocks are then synchronized in the direction opposite to the data exchange. The result of the synchronization is shown in FIG. 9, and one iteration is complete.
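A compact, CPU-only sketch of the ring exchange walked through above (the scatter-reduce rounds followed by the reverse-direction synchronization); plain Python lists stand in for GPU buffers and addition stands in for the per-block parameter operation, so the indexing rather than the arithmetic is the point.

def ring_scatter_reduce(chunks):
    """chunks[i][j] is data block j held by GPU i. After n - 1 exchange
    rounds, GPU i holds the fully accumulated block (i + 1) % n."""
    n = len(chunks)
    for step in range(n - 1):
        # snapshot: every GPU sends the value it held at the start of the round
        sent = [chunks[i][(i - step) % n] for i in range(n)]
        for i in range(n):
            chunks[(i + 1) % n][(i - step) % n] += sent[i]
    return chunks

def ring_allgather(chunks):
    """Synchronization phase, circulating opposite to the exchange direction,
    until every GPU holds every accumulated block."""
    n = len(chunks)
    for step in range(n - 1):
        sent = [chunks[i][(i + 1 + step) % n] for i in range(n)]
        for i in range(n):
            chunks[(i - 1) % n][(i + 1 + step) % n] = sent[i]
    return chunks

# 5 GPUs with 5 blocks each, as in FIG. 5; after the two phases every GPU
# holds, for every block index, the sum of that block over all 5 GPUs.
blocks = [[float(10 * gpu + blk) for blk in range(5)] for gpu in range(5)]
result = ring_allgather(ring_scatter_reduce(blocks))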
In step S204, within the second iteration count, the electronic device trains the second data classification model in a tree training mode based on a plurality of second sample data, so as to obtain a third data classification model.
This step can be realized by the steps (1) to (4).
(1) And the electronic equipment trains the second data classification model through a plurality of second training machines and a plurality of second sample data to obtain a second model parameter obtained by training each second training machine.
The second training machine is a device comprising a processor; for example, the second training machine may be a device that includes a CPU or a device that includes a GPU. The second number of the plurality of second training machines may be set and changed as needed, and is not specifically limited in the embodiment of the present disclosure; for example, the second number may be 5 or 8.
It should be noted that the first training machine described above may be used as the second training machine, or a training machine separate from the first training machine may be used as the second training machine. The second number of second training machines may be the same as or different from the first number of first training machines.
In a possible implementation manner, the electronic device may deploy the second data classification model to each second training machine, distribute a plurality of second sample data to each second training machine, and each training machine trains the second data classification model based on the received second sample data to obtain the second model parameters.
In another possible implementation manner, the electronic device may deploy the second data classification model to each second training machine, group the plurality of second sample data, and allocate one group of second sample data to each second training machine. This step may be implemented by the following steps (1-1) to (1-3):
(1-1) the electronic device dividing the plurality of second sample data into a second number of sample data groups, each sample data group including at least one second sample data.
In this step, the electronic device may divide the plurality of second sample data into the second number of sample data groups either uniformly or non-uniformly; accordingly, the number of second sample data included in each sample data group may be the same or different. After dividing the plurality of second sample data into the second number of sample data groups, the electronic device determines a data group identifier for each sample data group and stores the association between the data group identifier and the sample data group. The data group identifier may be a number of the data group.
For example, if the second number is N, the electronic device divides the plurality of second sample data into N sample data groups, each sample data group including at least one second sample data and being of the same size, and each sample data group having a unique data group identifier, which may be DATA0, DATA1, DATA2 … DATA(N-1).
(1-2) For each second training machine in each iteration, the electronic device selects, from the second number of sample data groups, a sample data group that has not been assigned to that second training machine.
The electronic device allocates one sample data group to each second training machine, and each second training machine corresponds to a different sample data group. During the first iterative training, the electronic device randomly allocates the second number of sample data groups to the second number of second training machines, and establishes a mapping relationship between the data group identifiers of the sample data groups and the machine identifiers of the second training machines. The machine identifier may be the SN (Serial Number) or the IP (Internet Protocol) address of the second training machine. During the second iterative training, for each second training machine, the electronic device determines, according to the established mapping relationship, the data group identifiers of the sample data groups that have not been allocated to that second training machine, and allocates to it the corresponding sample data group among the second number of sample data groups.
For example, if N is 3 and the second training machines are GPUs, the 3 second training machines are GPU0, GPU1, and GPU2, and the 3 sample data groups are DATA0, DATA1, and DATA2. During the first iterative training, the electronic device assigns DATA0 to GPU0, DATA1 to GPU1, and DATA2 to GPU2; during the second iterative training, it assigns DATA1 to GPU0, DATA2 to GPU1, and DATA0 to GPU2; during the third iterative training, it assigns DATA2 to GPU0, DATA0 to GPU1, and DATA1 to GPU2.
(1-3) The electronic device iteratively trains the second data classification model through the second training machine and the sample data group to obtain the second model parameters.

The electronic device inputs the sample data group into the second training machine; in each iteration, the second training machine iteratively trains the second data classification model based on the sample data group to obtain the second model parameters.
(2) The electronic equipment transmits the second model parameters of the plurality of second training machines to the summarizing machine, and the summarizing machine determines third model parameters based on the second model parameters obtained by training each second training machine.
The summarizing machine may be the same type of machine as the second training machines or a different type. When transmitting the second model parameters of the plurality of second training machines to the summarizing machine, one possible implementation is that the electronic device transmits all the second model parameters to the summarizing machine after every second training machine has finished its computation; another possible implementation is that, whenever a second training machine finishes its computation, its second model parameters are transmitted to the summarizing machine, until the second model parameters of all second training machines have been transmitted, as shown in fig. 10.
After receiving all the second model parameters, the summarizing machine averages the second model parameters to obtain the third model parameters and updates the second data classification model accordingly.
(3) The electronic device issues the third model parameters to each second training machine through the summarizing machine.
The electronic device issues the third model parameters obtained by the summarizing machine to each second training machine, and at the same time issues the updated second data classification model to each second training machine.
(4) For each second training machine, iteratively training the second data classification model according to the third model parameters and the plurality of second sample data until the number of iterations reaches the second iteration count, to obtain the third data classification model.
This step can be realized by the following steps (4-1) to (4-2), including:
(4-1) Determining a fourth learning rate of the second data classification model for the current iterative training.

The electronic device acquires a fifth learning rate of the second data classification model, where the fifth learning rate is the learning rate at the end of training the first data classification model in the ring training mode.

When the iteration number of the current iterative training is zero, the electronic device determines the ratio of the fifth learning rate to the second number as the fourth learning rate;

when the iteration number of the current iterative training is not zero, that is, after iteration has started, the electronic device obtains the sixth learning rate of the previous iteration and attenuates it using a polynomial decay strategy to obtain the fourth learning rate of the current iterative training. The sixth learning rate is the learning rate at the completion of the previous iteration; for example, after the first iteration is completed, the learning rate at that point is the sixth learning rate, and the learning rate obtained by attenuating it is used as the fourth learning rate of the second iteration.

(4-2) In each iteration process, the electronic device calculates the third model parameters and the plurality of second sample data according to the fourth learning rate, updates the second data classification model according to the calculation result, and repeats the iteration process until the number of iterations reaches the second iteration count, obtaining the third data classification model.
After the electronic device distributes the plurality of second sample data to the second training machines, each second training machine computes on its allocated sample data based on the fourth learning rate to obtain second model parameters and transmits them to the summarizing machine. The transmission of the second model parameters to the summarizing machine and the derivation of the third model parameters by the summarizing machine constitute one iteration; the second training machines then train on the third model parameters and their allocated data as the next iteration, and this repeats until the number of iterations reaches the second iteration count, yielding the third data classification model.
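The following is a minimal, purely illustrative sketch of the tree-style stage described in steps (1) to (4); it is not taken from the patent, and the helper names, the decay exponent, and the local training routine are assumptions. Each second training machine trains on a sample data group it has not been assigned before, the summarizing machine averages the resulting second model parameters into the third model parameters, and the fourth learning rate starts at the fifth learning rate divided by the number of machines and then decays polynomially:

```python
# A minimal sketch (assumed, not the patent's reference implementation) of the
# tree-style training stage: N second training machines and one summarizing machine.

def fourth_learning_rate(step, fifth_lr, n_machines, total_steps, power=2.0, end_lr=0.0):
    """Initial value is fifth_lr / n_machines; afterwards it decays polynomially."""
    base = fifth_lr / n_machines
    if step == 0:
        return base
    frac = min(step, total_steps) / float(total_steps)
    return (base - end_lr) * (1.0 - frac) ** power + end_lr

def tree_training(model_params, data_groups, fifth_lr, second_iterations, local_train):
    """data_groups: one sample data group per second training machine.
    local_train(params, group, lr) returns that machine's second model parameters."""
    n = len(data_groups)
    third_params = model_params
    for step in range(second_iterations):
        lr = fourth_learning_rate(step, fifth_lr, n, second_iterations)
        # Each machine trains on a group it has not been assigned before (rotation,
        # matching the DATA0/DATA1/DATA2 example above).
        second_params = [
            local_train(third_params, data_groups[(m + step) % n], lr)
            for m in range(n)
        ]
        # Summarizing machine: element-wise average of the second model parameters.
        third_params = [sum(vals) / n for vals in zip(*second_params)]
        # The third model parameters are then issued back to every second training machine.
    return third_params
```

Here `local_train` stands in for whatever per-machine optimization the second training machines actually run; only the data-group rotation, the averaging at the summarizing machine, and the learning-rate schedule follow the description above.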
It should be noted that steps S201 to S204 are a training process of the data classification model, and the data classification model only needs to be trained once. After the training of the data classification model is completed, the data classification model can be used for data classification through step S205 without performing the training of the data classification model again.
In step S205, when the data to be classified is classified, the electronic device inputs the data to be classified into a third data classification model, so as to obtain a classification result of the data.
When data classification is needed, the data to be classified is input into the trained third data classification model to obtain the data classification result. The data to be classified may be an image, an audio signal, or the like. When the data to be classified is an image, the third data classification model may be a classification model that identifies age from a face image, determines similar points among a plurality of images, or screens out images that match a preset type. When the data to be classified is an audio signal, the third data classification model may identify age based on the audio signal, or may select audio signals that meet a preset condition. The classification result may be the category to which the data to be classified belongs, or may be data screened out from the data to be classified that meets a preset condition.
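As a purely illustrative sketch (the model interface and class labels below are assumptions; the disclosure does not fix an API), classifying data with the trained third data classification model amounts to one forward pass followed by selecting the highest-scoring class:

```python
# Hypothetical inference helper; `third_model` is any callable that maps an input
# sample (e.g. a preprocessed face image or audio feature vector) to per-class scores.

def classify(third_model, sample, class_names):
    scores = third_model(sample)                                # forward pass
    best = max(range(len(scores)), key=lambda i: scores[i])    # highest-scoring class
    return class_names[best], scores[best]

# Example with assumed age-bucket labels:
# label, score = classify(third_model, face_image, ["0-18", "19-40", "41+"])
```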
In the embodiment of the disclosure, within the first iteration count, the first data classification model is trained in a ring training mode based on a plurality of first sample data to obtain a second data classification model; within the second iteration count, the second data classification model is trained in a tree training mode based on a plurality of second sample data to obtain a third data classification model. During training, the ring training mode saves a large amount of time compared with the conventional training mode, while combining it with the tree training mode guarantees the accuracy of the trained data classification model, so the training efficiency of the data classification model is improved.
FIG. 11 is a block diagram illustrating a data classification model training apparatus according to an exemplary embodiment. Referring to fig. 11, the apparatus includes an acquisition module 1101, a first training module 1102 and a second training module 1103.
The obtaining module 1101 is configured to obtain a plurality of first sample data, a plurality of second sample data, and a first iteration number and a second iteration number, where a sum of the first iteration number and the second iteration number is a total iteration number of model training.
The first training module 1102 is configured to train, within the first iteration count, the first data classification model in a ring training mode based on the plurality of first sample data, to obtain a second data classification model.
The second training module 1103 is configured to train the second data classification model in a tree training manner based on a plurality of second sample data within a second iteration number, so as to obtain a third data classification model.
In a possible implementation manner, the first training module 1102 is further configured to train the first data classification model through a plurality of first training machines and a plurality of first sample data, to obtain a first model parameter trained by each first training machine, where the first sample data is used for training the first data classification model, and the second sample data is used for training the second data classification model;
transmitting the first model parameters obtained by each first training machine according to the annular connection sequence of each first training machine, so that each first training machine obtains the first model parameters obtained by other first training machines;
and for each first training machine, performing iterative training on the first data classification model according to the first model parameters obtained by training the first training machine, the first model parameters obtained by training other first training machines obtained by the first training machine and the plurality of first sample data until the iteration times reach the first iteration times to obtain a second data classification model.
In another possible implementation manner, the first training module 1102 is further configured to divide the plurality of first sample data into a first number of sample data groups, where each sample data group includes at least one first sample data, and the first number is the number of the plurality of first training machines;
for each first training machine of each iteration, selecting a sample data set not assigned to the first training machine from the first number of sample data sets;
and performing iterative training on the first data classification model through the first training machine and the sample data set to obtain a first model parameter.
In another possible implementation, the first training module 1102 is further configured to determine a first learning rate of a first data classification model corresponding to the current iteration training, and determine a second learning rate of the first training machine;
in each iteration process, the first model parameter of the first training machine, the first model parameters of other first training machines acquired by the first training machine and the plurality of first sample data are calculated according to the first learning rate and the second learning rate, the second data classification model is updated according to the calculation result, and the iteration process is repeated each time until the iteration times reach the first iteration times, so that the second data classification model is obtained.
In another possible implementation manner, the first training module 1102 is further configured to, when the number of iterations of the current iterative training is zero, take the initial learning rate as the first learning rate of the current iterative training;
when the iteration times of the current iteration training are not zero and the current iteration times are within the third iteration times, obtaining a third learning rate of the previous iteration, and linearly increasing the third learning rate to obtain a first learning rate of the current iteration training, wherein the third iteration times are smaller than the first iteration times;
and when the iteration number of the current iteration training is within a fourth iteration number, obtaining a third learning rate of the previous iteration, and attenuating the third learning rate by using a polynomial attenuation strategy to obtain a first learning rate of the current iteration training, wherein the fourth iteration number is greater than the third iteration number and smaller than the first iteration number.
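As a purely illustrative sketch — the warm-up slope, peak value, decay exponent, and step bookkeeping below are assumptions, since this section only names the strategies — the schedule for the first learning rate (initial value at iteration zero, linear increase within the third iteration count, polynomial decay within the fourth iteration count) could be expressed as:

```python
# Assumed sketch of the first-learning-rate schedule: warm-up, then polynomial decay.

def first_learning_rate(step, init_lr, peak_lr, third_iters, fourth_iters,
                        power=2.0, end_lr=0.0):
    """step: current iteration index; assumes 1 <= third_iters < fourth_iters."""
    if step == 0:
        return init_lr                                   # initial learning rate
    if step <= third_iters:
        # Linear increase from init_lr up to peak_lr across the warm-up phase.
        return init_lr + (peak_lr - init_lr) * step / third_iters
    # Polynomial decay from peak_lr towards end_lr over the remaining iterations.
    decay_steps = max(fourth_iters - third_iters, 1)
    frac = min(step - third_iters, decay_steps) / decay_steps
    return (peak_lr - end_lr) * (1.0 - frac) ** power + end_lr
```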
In another possible implementation manner, the first training module 1102 is further configured to determine a network layer where the first training machine is located, weights of the network layer, and gradients of the network layer;
determining a second learning rate of the first training machine according to the network layer, the weight of the network layer, the gradient of the network layer and the first model parameter of the first training machine;
the second learning rate is positively correlated with the weight of the network layer and the first model parameter of the first training machine, and the second learning rate is negatively correlated with the gradient of the network layer.
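This per-layer rate is reminiscent of layer-wise adaptive rate scaling: a larger weight magnitude raises the layer's rate while a larger gradient magnitude lowers it. The exact ratio and the stabilizing epsilon in the sketch below are assumptions not given in this section:

```python
# Assumed sketch of the per-layer "second learning rate": positively correlated with
# the layer's weight magnitude (and a scale derived from the machine's first model
# parameters), negatively correlated with the layer's gradient magnitude.

def second_learning_rate(layer_weights, layer_grads, param_scale, eps=1e-8):
    """layer_weights / layer_grads: lists of floats for one network layer;
    param_scale: a scalar derived from the first model parameters of this machine."""
    weight_norm = sum(w * w for w in layer_weights) ** 0.5
    grad_norm = sum(g * g for g in layer_grads) ** 0.5
    return param_scale * weight_norm / (grad_norm + eps)
```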
In another possible implementation manner, the second training module 1103 is further configured to train the second data classification model through a plurality of second training machines and a plurality of second sample data, so as to obtain a second model parameter trained by each second training machine;
transmitting the second model parameters of the plurality of second training machines to a summarizing machine, and determining third model parameters based on the second model parameters obtained by training of each second training machine through the summarizing machine;
the third model parameters are sent to each second training machine through the summarizing machine;
and for each second training machine, performing iterative training on the second data classification model according to the third model parameter and a plurality of second sample data until the iteration times reach the second iteration times to obtain a third data classification model.
In another possible implementation, the second training module 1103 is further configured to determine a fourth learning rate of the second data classification model when training in the current iteration;
and in each iteration process, calculating the third model parameters and a plurality of second sample data according to the fourth learning rate, updating the second data classification model according to the calculation result, and repeating the iteration process each time until the iteration times reach the second iteration times to obtain a third data classification model.
In another possible implementation manner, the second training module 1103 is further configured to obtain a fifth learning rate of the second data classification model, where the fifth learning rate is a learning rate at the end of training the first data classification model by using a circular training manner;
when the iteration number of the current iterative training is zero, determining the ratio of the fifth learning rate to a second number as a fourth learning rate, wherein the second number is the number of the plurality of second training machines;
and when the iteration number of the current iterative training is not zero, obtaining the sixth learning rate of the last iteration, and attenuating the sixth learning rate by using a polynomial attenuation strategy to obtain the fourth learning rate of the current iterative training.
In another possible implementation manner, the apparatus further includes:
and the input module is configured to input the data to be classified into the third data classification model when the data to be classified is classified, so that a classification result of the data is obtained.
In the embodiment of the disclosure, within the first iteration count, the first data classification model is trained in a ring training mode based on a plurality of first sample data to obtain a second data classification model; within the second iteration count, the second data classification model is trained in a tree training mode based on a plurality of second sample data to obtain a third data classification model. During training, the ring training mode saves a large amount of time compared with the conventional training mode, while combining it with the tree training mode guarantees the accuracy of the trained data classification model, so the training efficiency of the data classification model is improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 12 is a block diagram illustrating an electronic device in accordance with an example embodiment. The electronic device 1200 may be a portable mobile electronic device, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The electronic device 1200 may also be referred to by other names, such as user equipment, portable electronic device, laptop electronic device, or desktop electronic device.
In general, the electronic device 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1202 is used to store at least one instruction for execution by processor 1201 to implement a method of data classification model training provided by method embodiments in the present disclosure.
In some embodiments, the electronic device 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, touch display 1205, camera 1206, audio circuitry 1207, pointing component 1208, and power source 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1204 may communicate with other electronic devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks.
The display screen 1205 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over the surface of the display screen 1205. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1205 may be one, providing the front panel of the electronic device 1200; in other embodiments, the display panels 1205 can be at least two, respectively disposed on different surfaces of the electronic device 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved surface or on a folded surface of the electronic device 1200. Even further, the display screen 1205 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display panel 1205 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of an electronic apparatus, and a rear camera is disposed on a rear surface of the electronic apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is used to locate the current geographic location of the electronic device 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1209 is used to supply power to various components in the electronic device 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 may detect magnitudes of accelerations on three coordinate axes of a coordinate system established with the electronic apparatus 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the electronic device 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the electronic device 1200 in cooperation with the acceleration sensor 1211. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 1213 may be disposed on a side bezel of the electronic device 1200 and/or on an underlying layer of the touch display 1205. When the pressure sensor 1213 is disposed on a side frame of the electronic device 1200, a user's holding signal to the electronic device 1200 can be detected, and the processor 1201 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1213. When the pressure sensor 1213 is disposed at a lower layer of the touch display screen 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1214 is used for collecting a fingerprint of the user, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1214 may be disposed on the front, back, or side of the electronic device 1200. When a physical button or vendor Logo is provided on the electronic device 1200, the fingerprint sensor 1214 may be integrated with the physical button or vendor Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display 1205 according to the ambient light intensity collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the touch display panel 1205 is increased; when the ambient light intensity is low, the display brightness of the touch display panel 1205 is turned down. In another embodiment, processor 1201 may also dynamically adjust the camera head 1206 shooting parameters based on the ambient light intensity collected by optical sensor 1215.
The proximity sensor 1216, also called a distance sensor, is typically disposed on the front panel of the electronic device 1200. The proximity sensor 1216 is used to collect the distance between the user and the front of the electronic device 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front of the electronic device 1200 gradually decreases, the processor 1201 controls the touch display 1205 to switch from the bright-screen state to the screen-off state; when the proximity sensor 1216 detects that the distance between the user and the front of the electronic device 1200 gradually increases, the processor 1201 controls the touch display 1205 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not limiting of electronic device 1200 and may include more or fewer components than shown, or combine certain components, or employ a different arrangement of components.
The disclosed embodiments also provide a non-transitory computer-readable storage medium for an electronic device, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the instruction, the program, the set of codes, or the set of instructions is loaded by a processor and executed to implement the method for training a data classification model of the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. A method for training a data classification model, the method comprising:
acquiring a plurality of first sample data, a plurality of second sample data, a first iteration number and a second iteration number, wherein the sum of the first iteration number and the second iteration number is the total iteration number of model training, the first sample data is used for training a first data classification model, and the second sample data is used for training a second data classification model;
training the first data classification model through a plurality of first training machines and the plurality of first sample data to obtain a first model parameter obtained by training each first training machine;
transmitting the first model parameters obtained by each first training machine according to the annular connection sequence of each first training machine, so that each first training machine obtains the first model parameters obtained by other first training machines;
for each first training machine, determining a first learning rate of the first data classification model corresponding to current iteration training, and determining a second learning rate of the first training machine;
in each iteration process, calculating a first model parameter of the first training machine, first model parameters of other first training machines acquired by the first training machine and the plurality of first sample data according to the first learning rate and the second learning rate, updating the second data classification model according to the calculation result, and repeating the process of each iteration until the number of iterations reaches the first number of iterations to obtain the second data classification model;
and in the second iteration times, training the second data classification model by using a tree training mode based on the plurality of second sample data to obtain a third data classification model.
2. The method of claim 1, wherein the training the first data classification model through a plurality of first training machines and the plurality of first sample data to obtain first model parameters trained by each first training machine comprises:
dividing the plurality of first sample data into a first number of sample data groups, each sample data group including at least one first sample data, the first number being the number of the plurality of first training machines;
for each first training machine of each iteration, selecting one sample data set from the first number of sample data sets that is not assigned to the first training machine;
and performing iterative training on the first data classification model through the first training machine and the sample data set to obtain the first model parameter.
3. The method of claim 1, wherein determining a first learning rate for training the corresponding first data classification model for a current iteration comprises:
when the iteration number of the current iteration training is zero, taking an initial learning rate as a first learning rate of the current iteration training;
when the iteration times of the current iteration training are not zero and the current iteration times are within a third iteration time, obtaining a third learning rate of the previous iteration, and linearly increasing the third learning rate to obtain a first learning rate of the current iteration training, wherein the third iteration time is smaller than the first iteration time;
and when the iteration number of the current iteration training is within a fourth iteration number, obtaining a third learning rate of the previous iteration, and attenuating the third learning rate by using a polynomial attenuation strategy to obtain a first learning rate of the current iteration training, wherein the fourth iteration number is greater than the third iteration number and smaller than the first iteration number.
4. The method of claim 1, wherein the determining a second learning rate for the first training machine comprises:
determining a network layer where the first training machine is located, the weight of the network layer and the gradient of the network layer;
determining a second learning rate of the first training machine according to the network layer, the weight of the network layer, the gradient of the network layer and a first model parameter of the first training machine;
wherein the second learning rate is positively correlated with the weights of the network layer and the first model parameters of the first training machine, and the second learning rate is negatively correlated with the gradient of the network layer.
5. The method according to claim 1, wherein the training the second data classification model in a tree training manner based on the plurality of second sample data within the second number of iterations to obtain a third data classification model comprises:
training the second data classification model through a plurality of second training machines and the plurality of second sample data to obtain second model parameters obtained by training each second training machine;
transmitting the second model parameters of the plurality of second training machines to a summarizing machine, and determining third model parameters through the summarizing machine based on the second model parameters obtained by training each second training machine;
sending the third model parameter to each second training machine through the summarizing machine;
and for each second training machine, performing iterative training on the second data classification model according to the third model parameter and the plurality of second sample data until the iteration times reach the second iteration times to obtain the third data classification model.
6. The method of claim 5, wherein iteratively training the second data classification model according to the third model parameter and the plurality of second sample data until the number of iterations reaches the second number of iterations, to obtain the third data classification model, comprises:
determining a fourth learning rate of the second data classification model during current iterative training;
and in each iteration process, calculating the third model parameter and the plurality of second sample data according to the fourth learning rate, updating the second data classification model according to the calculation result, and repeating the process of each iteration until the iteration number reaches the second iteration number to obtain the third data classification model.
7. The method of claim 6, wherein determining the fourth learning rate of the second data classification model when trained at the current iteration comprises:
acquiring a fifth learning rate of the second data classification model, wherein the fifth learning rate is a learning rate at the end of training the first data classification model by using an annular training mode;
when the number of iterations of the current iterative training is zero, determining a ratio of the fifth learning rate to a second number as the fourth learning rate, the second number being the number of the plurality of second training machines;
and when the iteration number of the current iterative training is not zero, obtaining a sixth learning rate of the previous iteration, and attenuating the sixth learning rate by using a polynomial attenuation strategy to obtain the fourth learning rate of the current iterative training.
8. The method according to any one of claims 1 to 7, wherein the circular training mode is a training mode that trains by using a ring-reduce algorithm.
9. The method of claim 1, further comprising:
and when the data to be classified is classified, inputting the data to be classified into the third data classification model to obtain a classification result of the data.
10. An apparatus for training a data classification model, the apparatus comprising:
the acquisition module is configured to acquire a plurality of first sample data, a plurality of second sample data, a first iteration number and a second iteration number, wherein the sum of the first iteration number and the second iteration number is the total iteration number of model training, the first sample data is used for training a first data classification model, and the second sample data is used for training a second data classification model;
a first training module configured to train the first data classification model through a plurality of first training machines and the plurality of first sample data to obtain first model parameters trained by each first training machine; transmitting the first model parameters obtained by each first training machine according to the annular connection sequence of each first training machine, so that each first training machine obtains the first model parameters obtained by other first training machines; for each first training machine, determining a first learning rate of the first data classification model corresponding to current iteration training, and determining a second learning rate of the first training machine; in each iteration process, calculating a first model parameter of the first training machine, first model parameters of other first training machines acquired by the first training machine and the plurality of first sample data according to the first learning rate and the second learning rate, updating the second data classification model according to the calculation result, and repeating the process of each iteration until the number of iterations reaches the first number of iterations to obtain the second data classification model;
and the second training module is configured to train the second data classification model in a tree training mode based on the plurality of second sample data within the second iteration number to obtain a third data classification model.
11. The apparatus of claim 10, wherein the first training module is further configured to divide the plurality of first sample data into a first number of sample data groups, each sample data group including at least one first sample data, the first number being a number of the plurality of first training machines; for each first training machine of each iteration, selecting one sample data set from the first number of sample data sets that is not assigned to the first training machine; and performing iterative training on the first data classification model through the first training machine and the sample data set to obtain the first model parameter.
12. The apparatus of claim 10, wherein the first training module is further configured to use an initial learning rate as the first learning rate of the current iterative training when the number of iterations of the current iterative training is zero; when the iteration times of the current iteration training are not zero and the current iteration times are within a third iteration time, obtaining a third learning rate of the previous iteration, and linearly increasing the third learning rate to obtain a first learning rate of the current iteration training, wherein the third iteration time is smaller than the first iteration time; and when the iteration number of the current iteration training is within a fourth iteration number, obtaining a third learning rate of the previous iteration, and attenuating the third learning rate by using a polynomial attenuation strategy to obtain a first learning rate of the current iteration training, wherein the fourth iteration number is greater than the third iteration number and smaller than the first iteration number.
13. The apparatus of claim 10, wherein the first training module is further configured to determine a network layer in which the first training machine is located, a weight of the network layer, and a gradient of the network layer; determining a second learning rate of the first training machine according to the network layer, the weight of the network layer, the gradient of the network layer and a first model parameter of the first training machine; wherein the second learning rate is positively correlated with the weights of the network layer and the first model parameters of the first training machine, and the second learning rate is negatively correlated with the gradient of the network layer.
14. The apparatus of claim 10, wherein the second training module is further configured to train the second data classification model through a plurality of second training machines and the plurality of second sample data, resulting in second model parameters trained by each second training machine; transmitting the second model parameters of the plurality of second training machines to a summarizing machine, and determining third model parameters through the summarizing machine based on the second model parameters obtained by training each second training machine; sending the third model parameter to each second training machine through the summarizing machine; and for each second training machine, performing iterative training on the second data classification model according to the third model parameter and the plurality of second sample data until the iteration times reach the second iteration times to obtain the third data classification model.
15. The apparatus of claim 14, wherein the second training module is further configured to determine a fourth learning rate of the second data classification model when trained at a current iteration; and in each iteration process, calculating the third model parameter and the plurality of second sample data according to the fourth learning rate, updating the second data classification model according to the calculation result, and repeating the process of each iteration until the iteration number reaches the second iteration number to obtain the third data classification model.
16. The apparatus of claim 15, wherein the second training module is further configured to obtain a fifth learning rate of the second data classification model, and the fifth learning rate is a learning rate at the end of training the first data classification model using a circular training mode; when the number of iterations of the current iterative training is zero, determining a ratio of the fifth learning rate to a second number as the fourth learning rate, the second number being the number of the plurality of second training machines; and when the iteration number of the current iterative training is not zero, obtaining a sixth learning rate of the previous iteration, and attenuating the sixth learning rate by using a polynomial attenuation strategy to obtain the fourth learning rate of the current iterative training.
17. The apparatus according to any one of claims 10-16, wherein the circular training mode is a training mode that trains using a ring-reduce algorithm.
18. The apparatus of claim 10, further comprising:
and the input module is configured to input the data to be classified into the third data classification model when the data to be classified is classified, so as to obtain a classification result of the data.
19. An electronic device, comprising:
one or more processors;
volatile or non-volatile memory for storing the one or more processor-executable instructions;
wherein the one or more processors are configured to perform the method of data classification model training of any of claims 1-9.
20. A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by a processor of an electronic device, implement the method of data classification model training of any of claims 1-9.
CN201910105031.6A 2019-02-01 2019-02-01 Data classification model training method and device, electronic equipment and storage medium Active CN109816042B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910105031.6A CN109816042B (en) 2019-02-01 2019-02-01 Data classification model training method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910105031.6A CN109816042B (en) 2019-02-01 2019-02-01 Data classification model training method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109816042A CN109816042A (en) 2019-05-28
CN109816042B true CN109816042B (en) 2020-11-24

Family

ID=66606332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910105031.6A Active CN109816042B (en) 2019-02-01 2019-02-01 Data classification model training method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109816042B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148470B (en) * 2019-06-28 2022-11-04 富联精密电子(天津)有限公司 Parameter synchronization method, computer device and readable storage medium
CN113723603A (en) * 2020-05-26 2021-11-30 华为技术有限公司 Method, device and storage medium for updating parameters
CN111882206B (en) * 2020-07-25 2023-11-07 广州城市职业学院 Application value evaluation method for building information model used in building engineering
CN112686171B (en) * 2020-12-31 2023-07-18 深圳市华尊科技股份有限公司 Data processing method, electronic equipment and related products
CN112862088B (en) * 2021-01-18 2023-11-07 中山大学 Distributed deep learning method based on pipeline annular parameter communication

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106297774B (en) * 2015-05-29 2019-07-09 中国科学院声学研究所 A kind of the distributed parallel training method and system of neural network acoustic model
CN106372402B (en) * 2016-08-30 2019-04-30 中国石油大学(华东) The parallel method of fuzzy region convolutional neural networks under a kind of big data environment

Also Published As

Publication number Publication date
CN109816042A (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN109816042B (en) Data classification model training method and device, electronic equipment and storage medium
US11517099B2 (en) Method for processing images, electronic device, and storage medium
CN108063981B (en) Method and device for setting attributes of live broadcast room
CN110841285B (en) Interface element display method and device, computer equipment and storage medium
CN110659127A (en) Method, device and system for processing task
CN109840584B (en) Image data classification method and device based on convolutional neural network model
WO2022052620A1 (en) Image generation method and electronic device
CN108764530B (en) Method and device for configuring working parameters of oil well pumping unit
CN111005715A (en) Method and device for determining gas well yield and storage medium
CN110392375B (en) WiFi network channel modification method, terminal, server and storage medium
CN110837300B (en) Virtual interaction method and device, electronic equipment and storage medium
CN109218751A (en) The method, apparatus and system of recommendation of audio
CN110673944A (en) Method and device for executing task
CN111651693A (en) Data display method, data sorting method, device, equipment and medium
CN111813322A (en) Method, device and equipment for creating storage pool and storage medium
CN112181915B (en) Method, device, terminal and storage medium for executing service
CN108196813B (en) Method and device for adding sound effect
CN112907939B (en) Traffic control subarea dividing method and device
CN110580561B (en) Analysis method and device for oil well oil increasing effect and storage medium
CN110851435B (en) Data storage method and device
CN113935678A (en) Method, device, equipment and storage medium for determining multiple distribution terminals held by distributor
CN112132472A (en) Resource management method and device, electronic equipment and computer readable storage medium
CN111369434A (en) Method, device and equipment for generating cover of spliced video and storage medium
CN112052153A (en) Product version testing method and device
CN110533666B (en) Method for obtaining data block size, method and device for processing data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant