CN109190757B - Task processing method, device, equipment and computer readable storage medium - Google Patents

Task processing method, device, equipment and computer readable storage medium

Info

Publication number
CN109190757B
CN109190757B (application number CN201810893099.0A)
Authority
CN
China
Prior art keywords
branch
task
processed
processing
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810893099.0A
Other languages
Chinese (zh)
Other versions
CN109190757A (en)
Inventor
杨少雄
赵晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810893099.0A priority Critical patent/CN109190757B/en
Publication of CN109190757A publication Critical patent/CN109190757A/en
Application granted granted Critical
Publication of CN109190757B publication Critical patent/CN109190757B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a task processing method, a device, equipment and a computer readable storage medium, wherein the method comprises the following steps: receiving a task to be processed; processing the task to be processed simultaneously through each branch in a neural network model, the neural network model comprising at least one branch; receiving the processing result output by each branch; and voting on the processing results output by the branches according to a preset voting rule to obtain a final processing result. Because the same task is processed simultaneously through different branches of the neural network model, the accuracy with which the neural network model processes the task can be improved.

Description

Task processing method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of computers, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for task processing.
Background
An artificial neural network (ANN) is an algorithmic mathematical model that simulates the behavioral characteristics of animal neural networks and performs distributed, parallel information processing. Depending on the complexity of the system, such a network processes information by adjusting the interconnections among a large number of internal nodes, and it has self-learning and self-adaptive capabilities.
An existing neural network model generally has a plurality of different branches, and in order to make the neural network model more powerful, different tasks are generally executed by different branches.
However, because the branches of the neural network process different tasks, and different tasks often require different data, the branches each use different underlying data, which results in a large amount of computation and low computational efficiency. In addition, because different branches have different processing capabilities for different tasks, the processing results obtained by the branches are often not accurate enough.
Disclosure of Invention
The invention provides a task processing method, a task processing device and a computer readable storage medium, which are used to solve the technical problem that, in the prior art, each branch of a multi-branch neural network model processes a different task and, because the branches have different capabilities, the processing results are not accurate enough.
The first aspect of the present invention provides a task processing method, including:
receiving a task to be processed;
simultaneously processing the task to be processed through each branch in a neural network model, the neural network model comprising at least one branch;
receiving a processing result output by each branch;
and voting the processing result output by each branch according to a preset voting rule to obtain a final processing result.
Another aspect of the present invention provides a task processing apparatus including:
the receiving module is used for receiving the tasks to be processed;
the processing module is used for simultaneously processing the tasks to be processed through each branch in a neural network model, and the neural network model comprises at least one branch;
a result receiving module, configured to receive a processing result output by each of the branches;
and the voting module is used for voting the processing result output by each branch according to a preset voting rule to obtain a final processing result.
Still another aspect of the present invention provides a task processing device, including: a memory and a processor;
the memory is used for storing instructions executable by the processor;
wherein the processor is configured to execute the task processing method described above.
Yet another aspect of the present invention is to provide a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are executed by a processor to implement the task processing method as described above.
According to the task processing method, device, equipment and computer readable storage medium provided by the invention, a task to be processed is received; the task to be processed is processed simultaneously through each branch in a neural network model, the neural network model comprising at least one branch; the processing result output by each branch is received; and the processing results output by the branches are voted on according to a preset voting rule to obtain a final processing result. Because the same task is processed simultaneously through different branches of the neural network model, the accuracy with which the neural network model processes the task can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, the drawings in the following description show some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art.
Fig. 1 is a schematic flowchart of a task processing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a task processing method according to a second embodiment of the present invention;
fig. 3 is a schematic flowchart of a task processing method according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a task processing device according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a task processing device according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a task processing device according to a sixth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a task processing device according to a seventh embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings; obviously, the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained based on the embodiments of the present invention fall within the scope of the present invention.
Fig. 1 is a schematic flowchart of a task processing method according to an embodiment of the present invention, and as shown in fig. 1, the method includes:
and step 101, receiving a task to be processed.
The execution subject of the task processing method provided by the present invention may specifically be a task processing device, and the task processing device may be implemented by hardware and/or software. In this embodiment, the neural network model is used for processing tasks and can implement, for example, any task such as image recognition or reconstruction of a noisy image. Therefore, in order to process a task, a task to be processed may first be received.
Generally, the type of the neural network model includes, but is not limited to, a convolutional neural network, a long short-term memory network, a deep belief network, a generative adversarial network, a recurrent neural network, and the like; the present invention does not limit the type of the neural network model, and those skilled in the art can construct the neural network model according to actual needs.
Step 102, the task to be processed is processed simultaneously through each branch in a neural network model, wherein the neural network model comprises at least one branch.
An existing neural network model generally has a plurality of different branches, and in order to make the neural network model more powerful, different tasks are generally executed by different branches. However, because the branches of the neural network process different tasks, and different tasks often require different data, the branches each use different underlying data, which results in a large amount of computation and low computational efficiency. In addition, because different branches have different processing capabilities for different tasks, the processing results obtained by the branches are often not accurate enough.
In this embodiment, in order to solve the above technical problem, after the task to be processed is received, it may be processed simultaneously through each branch in the neural network model; specifically, the neural network model includes at least one branch. Processing the same task simultaneously through each branch of the neural network model can effectively improve the recognition accuracy of the model. In addition, when every branch processes the same task to be processed, the bottom layer data only needs to be obtained once for all branches, so the amount of computation of the neural network model can be greatly reduced and its processing efficiency improved.
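For illustration only (this sketch is not part of the patent, and the class name, layer sizes and number of branches are assumptions), the following PyTorch snippet shows one way a model with a shared trunk and several branches could process the same task simultaneously, the shared trunk standing in for the bottom layer data that only needs to be obtained once:

```python
import torch
import torch.nn as nn

class MultiBranchModel(nn.Module):
    """Hypothetical model: a shared trunk (the bottom layer data, computed once)
    feeding several branches that all process the same task."""

    def __init__(self, in_dim=128, hidden_dim=64, out_dim=10, num_branches=3):
        super().__init__()
        # Shared trunk: run once per task, its output reused by every branch.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # Several branches, each producing its own processing result.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, out_dim))
            for _ in range(num_branches)
        ])

    def forward(self, task):
        features = self.trunk(task)                             # bottom layer data, obtained once
        return [branch(features) for branch in self.branches]   # one result per branch

# Usage: one task in, one processing result per branch out (to be voted on later).
model = MultiBranchModel()
task = torch.randn(1, 128)
branch_results = model(task)    # list of 3 tensors, each of shape (1, 10)
```

The list returned by forward corresponds to the per-branch processing results that are collected in the next step.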
And 103, receiving the processing result output by each branch.
In this embodiment, each branch in the neural network model can output one processing result after processing the task to be processed, and therefore, after the task to be processed is simultaneously processed by each branch in the neural network, the processing result output by each branch can be received.
And 104, voting the processing result output by each branch according to a preset voting rule to obtain a final processing result.
In this embodiment, after the processing results output by the branches are acquired, they may be integrated in order to improve the accuracy of the final result. Specifically, the processing results output by the branches may be voted on according to a preset voting rule to obtain a final processing result, and subsequent operations may be performed according to that result. The voting rule may be set by the user; for example, it may take the average of the processing results of the branches, or it may be majority voting, which is not limited here.
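As one hedged example of such a preset voting rule (the function name and the use of class labels are illustrative assumptions, not details fixed by the patent), majority voting over classification-style branch outputs could look like this:

```python
from collections import Counter

def majority_vote(branch_results):
    """Preset voting rule (majority voting): the label predicted by the most
    branches wins; ties are broken by the order in which results were received."""
    counts = Counter(branch_results)
    label, _ = counts.most_common(1)[0]
    return label

# Example: three branches classify the same gesture image.
print(majority_vote(["fist", "open_palm", "fist"]))  # -> "fist"
```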
For example, in the prior art, a gesture recognition model often executes different tasks through the different branches of a neural network model, which results in low recognition accuracy. Therefore, the same task can be executed simultaneously through multiple branches of the gesture recognition model, so that the recognition accuracy of the gesture recognition model can be improved.
According to the task processing method provided by this embodiment, a task to be processed is received; the task to be processed is processed simultaneously through each branch in a neural network model, the neural network model comprising at least one branch; the processing result output by each branch is received; and the processing results output by the branches are voted on according to a preset voting rule to obtain a final processing result. Because the same task is processed simultaneously through different branches of the neural network model, the accuracy with which the neural network model processes the task can be improved.
Further, on the basis of the above embodiment, the method includes:
receiving a task to be processed;
simultaneously processing the task to be processed through each branch in a neural network model, the neural network model comprising at least one branch;
receiving a processing result output by each branch;
and calculating the average value of the processing results output by all the branches, and taking the average value as the final processing result.
In this embodiment, averaging may be used as the preset voting rule. Specifically, the task to be processed may be received and processed simultaneously through each branch in the neural network model, the neural network model including at least one branch. After the processing is completed, the processing result output by each branch is received. For each branch, the result output for the processed task may be a single value, so the sum of the processing results output by all branches can be calculated and divided by the number of branches to obtain the average of the results, and this average is taken as the final processing result, according to which subsequent operations are performed. It should be noted that, because the processing results output by the branches are averaged after they are received, and the average reflects the central tendency of a data set, the processing accuracy of the neural network model can be further improved.
The task processing method provided by this embodiment can further improve the processing accuracy of the neural network model by averaging the processing results after receiving the processing results output by each branch.
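A minimal sketch of this averaging rule, assuming each branch outputs a single numeric result (the names and values are illustrative only):

```python
def mean_vote(branch_results):
    """Preset voting rule (averaging): sum the results output by all branches
    and divide by the number of branches."""
    return sum(branch_results) / len(branch_results)

# Example: three branches each output one numeric result for the same task.
print(mean_vote([1.0, 2.0, 3.0]))  # -> 2.0
```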
Further, on the basis of any of the above embodiments, each of the branch output results includes at least one data type, and the data types of each of the branch outputs are consistent; the method comprises the following steps:
receiving a task to be processed;
simultaneously processing the task to be processed through each branch in a neural network model, the neural network model comprising at least one branch;
receiving a processing result output by each branch;
and calculating the average value of the data types output by all the branches aiming at each data type, and taking the average value of the data types as the final processing result.
In this embodiment, for each branch, the result output for the processed task may be a single value, or it may be a set of data containing multiple data types; since every branch processes the same data, each branch outputs one set of data, and the data types in the sets correspond to one another. Accordingly, averaging may be used as the preset voting rule. Specifically, the task to be processed may be received and processed simultaneously through each branch in the neural network model, the neural network model including at least one branch. After the processing is completed, the processing result output by each branch is received. For each data type in the set of data output by a branch, the processing results corresponding to that type output by the other branches are obtained, the sum of the processing results corresponding to that data type is calculated and divided by the number of branches to obtain the mean of that data type, and the mean of that data type is used in the final processing result. This is repeated for each data type until the mean of every data type has been calculated, and the means of the data types together constitute the final processing result.
In the task processing method provided by this embodiment, after the processing result output by each branch is received, the mean of each data type across all branches is calculated, and the means of the data types are taken as the final processing result, which can further improve the processing accuracy of the neural network model.
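Assuming, purely for illustration, that each branch outputs a dictionary keyed by data type (the patent only requires that the data types correspond across branches), the per-type averaging could be sketched as follows:

```python
def mean_vote_per_type(branch_results):
    """For each data type, average the corresponding values across all branches;
    the dict of per-type means is the final processing result."""
    data_types = branch_results[0].keys()
    num_branches = len(branch_results)
    return {
        dtype: sum(result[dtype] for result in branch_results) / num_branches
        for dtype in data_types
    }

# Example: every branch outputs the same two data types for the same task.
outputs = [
    {"x": 10.0, "confidence": 0.50},
    {"x": 12.0, "confidence": 0.75},
    {"x": 11.0, "confidence": 1.00},
]
print(mean_vote_per_type(outputs))  # -> {'x': 11.0, 'confidence': 0.75}
```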
Further, on the basis of any of the above embodiments, the method comprises:
receiving a task to be processed;
processing the task to be processed by adopting different methods through each branch in the neural network model;
receiving a processing result output by each branch;
and voting the processing result output by each branch according to a preset voting rule to obtain a final processing result.
In this embodiment, an existing neural network model generally has a plurality of different branches, and in order to make the neural network model more powerful, different tasks are generally executed by different branches. However, because different branches have different processing capabilities for different tasks, the processing results obtained by the branches are often not accurate enough. Therefore, the task to be processed can be received and then processed by each branch in the neural network model using a different method. Because each branch processes the one task to be processed through a different method, the technical problem of inaccurate processing results caused by different branches having different processing capabilities for different tasks can be effectively overcome, and the processing accuracy for the task to be processed can be further improved. After the processing is completed, the processing result output by each branch is received, and the processing results output by the branches are voted on according to a preset voting rule to obtain a final processing result, according to which subsequent operations are performed.
According to the task processing method provided by this embodiment, each branch processes the one task to be processed through a different method, so the technical problem of inaccurate processing results caused by different branches having different processing capabilities for different tasks can be effectively overcome, and the processing accuracy for the task to be processed can be further improved.
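One hedged way to picture this embodiment is to abstract the branches as distinct callables applied to the same input and then vote on their outputs; in the patent the branches live inside the neural network model, and the three concrete methods below are invented purely for illustration:

```python
import statistics

def branch_mean(values):     # branch 1: averages the signal
    return sum(values) / len(values)

def branch_median(values):   # branch 2: uses the median instead
    return statistics.median(values)

def branch_trimmed(values):  # branch 3: drops the extremes before averaging
    trimmed = sorted(values)[1:-1]
    return sum(trimmed) / len(trimmed)

def process_with_different_methods(task, branches, vote):
    """Every branch processes the same task, each with its own method,
    and a preset voting rule combines the results."""
    results = [branch(task) for branch in branches]
    return vote(results)

task = [3.0, 4.0, 5.0, 40.0]   # the same task handed to every branch
final = process_with_different_methods(
    task,
    branches=[branch_mean, branch_median, branch_trimmed],
    vote=lambda rs: sum(rs) / len(rs),   # averaging as the preset voting rule
)
print(final)
```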
Fig. 2 is a schematic flow chart of a task processing method according to a second embodiment of the present invention, and based on any of the above embodiments, as shown in fig. 2, the method includes:
step 201, receiving a task to be processed;
step 202, acquiring bottom layer data corresponding to the task to be processed;
step 203, sending the bottom layer data to each branch so that each branch processes the task to be processed according to the bottom layer data;
step 204, processing the task to be processed simultaneously through each branch in a neural network model, wherein the neural network model comprises at least one branch;
step 205, receiving the processing result output by each branch;
and step 206, voting the processing result output by each branch according to a preset voting rule to obtain a final processing result.
An existing neural network model generally has a plurality of different branches, and in order to make the neural network model more powerful, different tasks are generally executed by different branches. However, because the branches of the neural network process different tasks, and different tasks often require different data, the branches each use different underlying data, which results in a large amount of computation and low computational efficiency.
In this embodiment, different branches process the same task to be processed at the same time, so for the same task the required bottom layer data is the same. Therefore, after the task to be processed is received, the bottom layer data corresponding to it can be obtained and sent to each branch, so that each branch can process the task according to that bottom layer data. Because the bottom layer data is obtained only once, the amount of computation of the neural network model can be greatly reduced, and its processing efficiency can be improved. The task to be processed is processed simultaneously through each branch in the neural network model, the neural network model including at least one branch. After the processing is completed, the processing result output by each branch is received, and the processing results output by the branches are voted on according to a preset voting rule to obtain a final processing result, according to which subsequent operations are performed.
In the task processing method provided by this embodiment, the bottom layer data corresponding to the task to be processed is obtained and sent to each branch, so that each branch can process the task according to that bottom layer data. Because the bottom layer data is obtained only once, the amount of computation of the neural network model can be greatly reduced, and its processing efficiency can be improved.
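A minimal sketch of steps 202 and 203, assuming a hypothetical extract_bottom_layer_data helper: the bottom layer data is computed a single time and the same object is handed to every branch, rather than being recomputed per branch:

```python
def extract_bottom_layer_data(task):
    """Hypothetical, potentially expensive preparation step (e.g. decoding an
    image and computing shared features); it runs exactly once per task."""
    return [float(x) for x in task]   # placeholder for real feature extraction

def process_task(task, branches, vote):
    bottom_layer = extract_bottom_layer_data(task)            # step 202: obtained once
    results = [branch(bottom_layer) for branch in branches]   # steps 203-205: the same data is sent to, processed by, and collected from every branch
    return vote(results)                                      # step 206: vote per the preset rule

# Usage with three illustrative branches and averaging as the voting rule.
branches = [lambda d: sum(d) / len(d), lambda d: max(d), lambda d: min(d)]
print(process_task([1, 2, 3, 4], branches, vote=lambda rs: sum(rs) / len(rs)))
```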
Fig. 3 is a schematic flow chart of a task processing method according to a third embodiment of the present invention, where on the basis of any of the foregoing embodiments, as shown in fig. 3, the method includes:
step 301, receiving a task to be processed;
step 302, determining the type of the task to be processed;
step 303, determining the capability of each branch to process the current task to be processed according to the type of the task to be processed;
step 304, setting a weight for each branch according to the processing capacity of each branch;
step 305, processing the task to be processed simultaneously through each branch in a neural network model, wherein the neural network model comprises at least one branch;
step 306, receiving the processing result output by each branch;
and 307, voting the processing result output by each branch according to a preset voting rule and the weight of each branch to obtain a final processing result.
In this embodiment, because different branches in the neural network model have different processing capabilities for different tasks to be processed, after the task to be processed is received, its type may first be determined; the capability of each branch to process the current task is determined according to that type, and a different weight is set for each branch according to its capability. It can be understood that if a branch is better at processing the task to be processed, a higher weight may be set for it, and if a branch is weaker at processing the task, a lower weight may be set, so that the subsequently obtained result is more accurate. The task to be processed is then processed simultaneously through each branch in the neural network model, and after the processing is completed, the processing result output by each branch is received. The processing results output by the branches are voted on according to a preset voting rule and the weight of each branch to obtain a final processing result, according to which subsequent operations are performed. Specifically, the processing result output by each branch may first be multiplied by its weight, and the average of the weighted results is calculated as the final processing result.
According to the task processing method provided by this embodiment, after the task to be processed is received, its type can be determined, the capability of each branch to process the task is determined according to that type, and different weights are set for the branches according to their different capabilities, so that the processing result is no longer limited by the processing capability of any single branch, further improving the processing accuracy of the neural network model.
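A hedged sketch of the weighted voting of this embodiment: the per-branch weights are looked up from the task type (the capability table below is invented for illustration), each branch's result is multiplied by its weight, and the weighted results are averaged as described above; the weights here are chosen to average to 1 so the scale of the result is preserved:

```python
# Hypothetical capability table: weights per branch for each task type.
# Each row averages to 1 so that multiplying and then averaging keeps the scale.
BRANCH_WEIGHTS_BY_TASK_TYPE = {
    "gesture_image": [1.5, 0.9, 0.6],   # branch 0 is strongest on gesture images
    "face_image":    [0.6, 0.9, 1.5],   # branch 2 is strongest on face images
}

def weighted_vote(branch_results, task_type):
    """Steps 302-304 and 307: set weights from the task type, multiply each
    branch's result by its weight, and average the weighted results."""
    weights = BRANCH_WEIGHTS_BY_TASK_TYPE[task_type]
    weighted = [w * r for w, r in zip(weights, branch_results)]
    return sum(weighted) / len(weighted)

# Example: three branches score the same gesture image.
print(weighted_vote([0.9, 0.6, 0.3], task_type="gesture_image"))  # approx. 0.69
```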
Further, on the basis of any of the above embodiments, the method further includes:
establishing a model to be trained according to a preset hyper-parameter;
training the model to be trained according to prestored data to be trained to obtain the neural network model;
receiving a task to be processed;
simultaneously processing the task to be processed through each branch in a neural network model, the neural network model comprising at least one branch;
receiving a processing result output by each branch;
and voting the processing result output by each branch according to a preset voting rule to obtain a final processing result.
In this embodiment, in order to process the task to be processed with a neural network model, the neural network model needs to be established first. Generally, the type of the neural network model includes, but is not limited to, a convolutional neural network, a long short-term memory network, a deep belief network, a generative adversarial network, a recurrent neural network, and the like; the present invention does not limit the type of the neural network model, and those skilled in the art can construct the neural network model according to actual needs.
Specifically, a model to be trained may be established according to preset hyper-parameters, and the model to be trained is then trained on pre-stored data to be trained to obtain the neural network model. For example, if the model is a gesture recognition model, the data to be trained are images of multiple gestures; if the model is a face recognition model, the data to be trained are images of multiple faces. After the neural network model is established, the task to be processed can be received and processed simultaneously through the at least one branch in the neural network model; after the processing is completed, the processing result output by each branch is received, and the processing results output by the branches are voted on according to a preset voting rule to obtain a final processing result.
According to the task processing method provided by the embodiment, before the task to be processed is processed, the neural network model is established first, so that a foundation is provided for subsequent task processing.
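A hedged PyTorch sketch of this embodiment; the hyper-parameter names, the trunk-plus-branches structure and the loss function are illustrative assumptions rather than details fixed by the patent:

```python
import torch
import torch.nn as nn

# Hypothetical preset hyper-parameters (not specified by the patent).
HYPER_PARAMS = {"in_dim": 128, "hidden_dim": 64, "out_dim": 10,
                "num_branches": 3, "lr": 1e-3, "epochs": 5}

def build_model(hp):
    """Establish the model to be trained from the preset hyper-parameters:
    a shared trunk plus several identical branches."""
    trunk = nn.Sequential(nn.Linear(hp["in_dim"], hp["hidden_dim"]), nn.ReLU())
    heads = nn.ModuleList(
        [nn.Linear(hp["hidden_dim"], hp["out_dim"]) for _ in range(hp["num_branches"])]
    )
    return trunk, heads

def train(trunk, heads, data, labels, hp):
    """Train on pre-stored data; every branch is trained on the same task."""
    params = list(trunk.parameters()) + list(heads.parameters())
    optimizer = torch.optim.Adam(params, lr=hp["lr"])
    criterion = nn.CrossEntropyLoss()
    for _ in range(hp["epochs"]):
        optimizer.zero_grad()
        features = trunk(data)                                    # shared bottom layer data
        loss = sum(criterion(head(features), labels) for head in heads)
        loss.backward()
        optimizer.step()

# Pre-stored training data (random placeholders for, e.g., gesture images and labels).
data = torch.randn(32, HYPER_PARAMS["in_dim"])
labels = torch.randint(0, HYPER_PARAMS["out_dim"], (32,))
trunk, heads = build_model(HYPER_PARAMS)
train(trunk, heads, data, labels, HYPER_PARAMS)
```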
Fig. 4 is a schematic structural diagram of a task processing device according to a fourth embodiment of the present invention, and as shown in fig. 4, the task processing device includes:
and the receiving module 41 is used for receiving the task to be processed.
A processing module 42, configured to process the task to be processed simultaneously through each branch in a neural network model, where the neural network model includes at least one branch.
A result receiving module 43, configured to receive the processing result output by each branch.
And the voting module 44 is configured to vote for the processing result output by each branch according to a preset voting rule, so as to obtain a final processing result.
For example, in the prior art, a gesture recognition model often executes different tasks through the different branches of a neural network model, which results in low recognition accuracy. Therefore, the same task can be executed simultaneously through multiple branches of the gesture recognition model, so that the recognition accuracy of the gesture recognition model can be improved.
According to the task processing device provided by this embodiment, a task to be processed is received; the task to be processed is processed simultaneously through each branch in a neural network model, the neural network model comprising at least one branch; the processing result output by each branch is received; and the processing results output by the branches are voted on according to a preset voting rule to obtain a final processing result. Because the same task is processed simultaneously through different branches of the neural network model, the accuracy with which the neural network model processes the task can be improved.
Further, on the basis of the above embodiment, the apparatus includes:
the receiving module is used for receiving the tasks to be processed;
the processing module is used for simultaneously processing the tasks to be processed through each branch in a neural network model, and the neural network model comprises at least one branch;
a result receiving module, configured to receive a processing result output by each of the branches;
the voting module specifically comprises:
and the calculating unit is used for calculating the average value of the processing results output by all the branches, and taking the average value as the final processing result.
The task processing device provided in this embodiment can further improve the processing accuracy of the neural network model by averaging the processing results after receiving the processing results output by each branch.
Further, on the basis of any of the above embodiments, the result output by each branch includes at least one data type, and the data types output by the branches are consistent; the apparatus includes:
the receiving module, used for receiving the task to be processed;
the processing module, used for processing the task to be processed simultaneously through each branch in a neural network model, the neural network model comprising at least one branch;
a result receiving module, configured to receive the processing result output by each branch;
the voting module, which specifically comprises the calculating unit; and the calculating unit specifically includes:
an average value operator unit, used for calculating, for each data type, the average value of the processing results corresponding to that data type output by all branches, and taking the average value of each data type as the final processing result.
In the task processing device provided by this embodiment, after the processing result output by each branch is received, the mean of each data type across all branches is calculated, and the means of the data types are taken as the final processing result, which can further improve the processing accuracy of the neural network model.
Further, on the basis of any of the above embodiments, the apparatus comprises:
the receiving module is used for receiving the tasks to be processed;
the processing module specifically comprises:
the to-be-processed task processing unit is used for processing the to-be-processed task by adopting different methods through each branch in the neural network model;
a result receiving module, configured to receive a processing result output by each of the branches;
and the voting module is used for voting the processing result output by each branch according to a preset voting rule to obtain a final processing result.
In the task processing device provided by this embodiment, each branch processes the one task to be processed through a different method, so the technical problem of inaccurate processing results caused by different branches having different processing capabilities for different tasks can be effectively overcome, and the processing accuracy for the task to be processed can be further improved.
Fig. 5 is a schematic structural diagram of a task processing device according to a fifth embodiment of the present invention, where on the basis of any of the foregoing embodiments, as shown in fig. 5, the task processing device includes:
an accepting module 51, configured to accept a task to be processed;
a bottom layer data obtaining module 52, configured to obtain bottom layer data corresponding to the task to be processed;
a sending module 53, configured to send the bottom layer data to each branch, so that each branch processes the to-be-processed task according to the bottom layer data;
a processing module 54, configured to process the task to be processed simultaneously through each branch in a neural network model, where the neural network model includes at least one branch;
a result receiving module 55, configured to receive a processing result output by each of the branches;
and the voting module 56 is configured to vote for the processing result output by each branch according to a preset voting rule, so as to obtain a final processing result.
The task processing device provided in this embodiment obtains the bottom layer data corresponding to the task to be processed and sends it to each branch, so that each branch can process the task according to that bottom layer data. Because the bottom layer data is obtained only once, the amount of computation of the neural network model can be greatly reduced, and its processing efficiency can be improved.
Fig. 6 is a schematic structural diagram of a task processing device according to a sixth embodiment of the present invention, where on the basis of any of the foregoing embodiments, as shown in fig. 6, the task processing device includes:
the receiving module 61 is used for receiving the tasks to be processed;
a type determining module 62, configured to determine a type of the task to be processed;
a capability determining module 63, configured to determine, according to the type of the task to be processed, a capability of each branch to process the current task to be processed;
a weight setting module 64, configured to set a weight for each of the branches according to a processing capability of each of the branches;
a processing module 65, configured to process the task to be processed simultaneously through each branch in a neural network model, where the neural network model includes at least one branch;
a result receiving module 66, configured to receive the processing result output by each branch;
correspondingly, the voting module 67 specifically includes:
the voting unit 601 is configured to vote for the processing result output by each branch according to a preset voting rule and the weight of each branch, so as to obtain a final processing result.
After receiving the task to be processed, the task processing device provided in this embodiment may first determine the type of the task to be processed, determine the capability of each branch to process the task according to that type, and set different weights for the branches according to their different capabilities, so that the processing result is no longer limited by the processing capability of any single branch, further improving the processing accuracy of the neural network model.
Further, on the basis of any one of the above embodiments, the apparatus further includes:
the model establishing module is used for establishing a model to be trained according to a preset hyper-parameter;
the training module is used for training the model to be trained according to prestored data to be trained to obtain the neural network model;
the receiving module is used for receiving the tasks to be processed;
the processing module is used for simultaneously processing the tasks to be processed through each branch in a neural network model, and the neural network model comprises at least one branch;
a result receiving module, configured to receive a processing result output by each of the branches;
and the voting module is used for voting the processing result output by each branch according to a preset voting rule to obtain a final processing result.
The task processing device provided by this embodiment establishes the neural network model first before processing the task to be processed, and provides a basis for subsequent task processing.
Fig. 7 is a schematic structural diagram of a task processing device according to a seventh embodiment of the present invention, and as shown in fig. 7, the task processing device includes: a memory 71, a processor 72;
a memory 71 for storing instructions executable by the processor 72;
wherein the processor 72 is configured to execute the task processing method described above.
Yet another embodiment of the present invention provides a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are executed by a processor to implement the task processing method as described above.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (16)

1. A task processing method, comprising:
receiving a task to be processed; wherein, the task to be processed is an image to be processed;
simultaneously processing the task to be processed through each branch in a neural network model, wherein the neural network model comprises at least two branches; the neural network model is a gesture recognition model, and the image to be processed is a gesture image, or the neural network model is a face recognition model, and the image to be processed is a face image;
receiving a processing result output by each branch;
and voting the processing result output by each branch according to a preset voting rule to obtain a final processing result, wherein the final processing result is an image identification result or an image reconstruction result.
2. The method according to claim 1, wherein voting the processing result output from each branch according to a preset voting rule to obtain a final processing result comprises:
and calculating the average value of the processing results output by all the branches, and taking the average value as the final processing result.
3. The method of claim 2, wherein each of the branch output results comprises at least one data type, and the data types of each of the branch outputs are consistent;
accordingly, the calculating an average value of the processing results of all the branch outputs, taking the average value as the final processing result, includes:
and calculating the average value of the processing results corresponding to the data type and output by all the branches aiming at each data type, and taking the average value of the processing results corresponding to each data type as the final processing result.
4. The method of claim 1, wherein the processing the task to be processed simultaneously through each branch in the neural network model comprises:
and processing the task to be processed by adopting different methods through each branch in the neural network model.
5. The method of claim 1, wherein before simultaneously processing the task to be processed through each branch of the neural network model, further comprising:
acquiring bottom layer data corresponding to the task to be processed;
and sending the bottom layer data to each branch so that each branch processes the task to be processed according to the bottom layer data.
6. The method of claim 1, wherein before simultaneously processing the task to be processed through each branch of the neural network model, further comprising:
determining the type of the task to be processed;
determining the capability of each branch for processing the current task to be processed according to the type of the task to be processed;
setting a weight for each branch according to the processing capacity of each branch;
correspondingly, the voting is performed on the processing result output by each branch according to a preset voting rule to obtain a final processing result, and the method comprises the following steps:
and voting the processing result output by each branch according to a preset voting rule and the weight of each branch to obtain a final processing result.
7. The method according to any one of claims 1-6, wherein before receiving the task to be processed, the method further comprises:
establishing a model to be trained according to a preset hyper-parameter;
and training the model to be trained according to the pre-stored data to be trained to obtain the neural network model.
8. A task processing apparatus, comprising:
the receiving module is used for receiving the tasks to be processed; wherein, the task to be processed is an image to be processed;
the processing module is used for simultaneously processing the tasks to be processed through each branch in a neural network model, and the neural network model comprises at least two branches; the neural network model is a gesture recognition model, and the image to be processed is a gesture image, or the neural network model is a face recognition model, and the image to be processed is a face image;
a result receiving module, configured to receive a processing result output by each of the branches;
and the voting module is used for voting the processing result output by each branch according to a preset voting rule to obtain a final processing result, wherein the final processing result is an image identification result or an image reconstruction result.
9. The apparatus of claim 8, wherein the voting module comprises:
and the calculating unit is used for calculating the average value of the processing results output by all the branches, and taking the average value as the final processing result.
10. The apparatus of claim 9, wherein each of the branch output results comprises at least one data type, and the data types of each of the branch outputs are consistent;
accordingly, the calculation unit comprises:
and the average value operator unit is used for calculating the average value of the processing results corresponding to the data type and output by all the branches aiming at each data type, and taking the average value of the processing results corresponding to each data type as the final processing result.
11. The apparatus of claim 8, wherein the processing module comprises:
and the to-be-processed task processing unit is used for processing the to-be-processed task by adopting different methods through each branch in the neural network model.
12. The apparatus of claim 8, further comprising:
the bottom layer data acquisition module is used for acquiring bottom layer data corresponding to the task to be processed;
and the sending module is used for sending the bottom layer data to each branch so that each branch processes the task to be processed according to the bottom layer data.
13. The apparatus of claim 8, further comprising:
the type determining module is used for determining the type of the task to be processed;
the capacity determining module is used for determining the capacity of each branch for processing the current task to be processed according to the type of the task to be processed;
the weight setting module is used for setting weight for each branch according to the processing capacity of each branch;
accordingly, the voting module comprises:
and the voting unit is used for voting the processing result output by each branch according to a preset voting rule and the weight of each branch to obtain a final processing result.
14. The apparatus of any one of claims 8-13, further comprising:
the model establishing module is used for establishing a model to be trained according to a preset hyper-parameter;
and the training module is used for training the model to be trained according to pre-stored data to be trained to obtain the neural network model.
15. A task processing device comprising: a memory, a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the task processing method of any one of claims 1-7.
16. A computer-readable storage medium having stored therein computer-executable instructions for implementing the task processing method of any one of claims 1 to 7 when executed by a processor.
CN201810893099.0A 2018-08-07 2018-08-07 Task processing method, device, equipment and computer readable storage medium Active CN109190757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810893099.0A CN109190757B (en) 2018-08-07 2018-08-07 Task processing method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810893099.0A CN109190757B (en) 2018-08-07 2018-08-07 Task processing method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109190757A CN109190757A (en) 2019-01-11
CN109190757B true CN109190757B (en) 2021-05-04

Family

ID=64921020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810893099.0A Active CN109190757B (en) 2018-08-07 2018-08-07 Task processing method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109190757B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675378B (en) * 2019-09-23 2022-04-08 赵晖 Image identification method and system for stability of spinal metastasis tumor
CN113051069B (en) * 2019-12-28 2023-12-08 华为技术有限公司 Data analysis method and device based on multitasking and terminal equipment
CN112446439B (en) * 2021-01-29 2021-04-23 魔视智能科技(上海)有限公司 Inference method and system for deep learning model dynamic branch selection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273784A (en) * 2016-04-01 2017-10-20 富士施乐株式会社 Image steganalysis apparatus and method
CN107832219A (en) * 2017-11-13 2018-03-23 北京航空航天大学 The construction method of software fault prediction technology based on static analysis and neutral net

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4208485B2 (en) * 2001-05-31 2009-01-14 キヤノン株式会社 Pulse signal processing circuit, parallel processing circuit, pattern recognition device, and image input device
US9965717B2 (en) * 2015-11-13 2018-05-08 Adobe Systems Incorporated Learning image representation by distilling from multi-task networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273784A (en) * 2016-04-01 2017-10-20 富士施乐株式会社 Image steganalysis apparatus and method
CN107832219A (en) * 2017-11-13 2018-03-23 北京航空航天大学 The construction method of software fault prediction technology based on static analysis and neutral net

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A novel weighted voting algorithm based on neural networks for fault-tolerant systems;Faraneh Zarafshan;《2010 3rd international conference on computer science and information technology》;20101231;135-139 *
Research on a Dynamic Branch Predictor Based on SimpleScalar (基于SimpleScalar的动态分支预测器研究); 张筱; 2011-11-23; 19-21 *

Also Published As

Publication number Publication date
CN109190757A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
US11270190B2 (en) Method and apparatus for generating target neural network structure, electronic device, and storage medium
CN108615071B (en) Model testing method and device
CN109190757B (en) Task processing method, device, equipment and computer readable storage medium
CN109685097B (en) Image detection method and device based on GAN
CN110135582B (en) Neural network training method, neural network training device, image processing method, image processing device and storage medium
WO2022027913A1 (en) Target detection model generating method and apparatus, device and storage medium
CN109102017B (en) Neural network model processing method, device, equipment and readable storage medium
CN111695624B (en) Updating method, device, equipment and storage medium of data enhancement strategy
CN114186632A (en) Method, device, equipment and storage medium for training key point detection model
US20200342315A1 (en) Method, device and computer program for creating a deep neural network
CN110824587A (en) Image prediction method, image prediction device, computer equipment and storage medium
CN114091554A (en) Training set processing method and device
CN111950633A (en) Neural network training method, neural network target detection method, neural network training device, neural network target detection device and storage medium
CN111652371A (en) Offline reinforcement learning network training method, device, system and storage medium
CN112836820A (en) Deep convolutional network training method, device and system for image classification task
CN107844803B (en) Picture comparison method and device
CN109034176B (en) Identification system and identification method
CN116385369A (en) Depth image quality evaluation method and device, electronic equipment and storage medium
CN113269812B (en) Training and application method, device, equipment and storage medium of image prediction model
CN114820755A (en) Depth map estimation method and system
CN110647805B (en) Reticulate pattern image recognition method and device and terminal equipment
US20200184331A1 (en) Method and device for processing data through a neural network
CN110705437A (en) Face key point detection method and system based on dynamic cascade regression
CN112883988B (en) Training and feature extraction method of feature extraction network based on multiple data sets
CN116450187B (en) Digital online application processing method and AI application system applied to AI analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant