CN107844371A - Task processing method, system and electronic equipment - Google Patents

Task processing method, system and electronic equipment

Info

Publication number
CN107844371A
Authority
CN
China
Prior art keywords
task
resource object
parameters
resource
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710951710.6A
Other languages
Chinese (zh)
Inventor
吕江昭
徐新坤
鲍永成
刘海锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201710951710.6A priority Critical patent/CN107844371A/en
Publication of CN107844371A publication Critical patent/CN107844371A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Stored Programmes (AREA)

Abstract

The present disclosure provides a task processing method comprising: obtaining the task type and task parameters of a task; calling a resource object template according to the task type and updating the resource object template according to the task parameters; generating at least one resource object according to the updated resource object template; and requesting, according to the resource object, the resources required by the task.

Description

Task processing method, system and electronic equipment
Technical field
The present disclosure relates to the field of computer technology, and more particularly to a task processing method, a task processing system, and an electronic device.
Background
Distributed computing can share scarce resources, balance the computational load across multiple computers, and allow a program to run on the most suitable computer; it is widely used to perform a variety of computing tasks. Taking deep learning as an example, the amount of computation during training is enormous, so training must be handled by distributed computing, which coordinates computing resources to complete the training. On the other hand, the emergence of deep learning frameworks has to some extent reduced the difficulty of developing artificial-intelligence and deep-learning applications, and application developers can select a suitable deep learning framework for development.
In the course of realizing the present inventive concept, the inventors found that the prior art has at least the following problem: for the tasks they select, application developers must manually manage and invoke the underlying resources, and therefore face high selection and learning costs.
Summary
In view of this, the present disclosure provides a task processing method, system, and device that support automatic resource management and invocation for multiple kinds of tasks.
One aspect of the present disclosure provides a task processing method comprising: obtaining the task type and task parameters of a task; calling a resource object template according to the task type and updating the resource object template according to the task parameters; generating at least one resource object according to the updated resource object template; and requesting, according to the resource object, the resources required by the task.
According to an embodiment of the present disclosure, after obtaining the task type and task parameters of the task, the method further comprises preprocessing the task.
According to an embodiment of the present disclosure, preprocessing the task comprises at least one of the following: parsing the required dependency programs according to the task parameters and installing them; standardizing the data to be processed according to the task parameters; or optimizing the task script.
According to an embodiment of the present disclosure, obtaining the task type and task parameters of the task comprises receiving a user-defined image and extracting the task type and task parameters of the task from the image.
According to an embodiment of the present disclosure, the task comprises a distributed training task, and the task type comprises the execution engine used by the distributed training.
Another aspect of the present disclosure provides a task processing system comprising: an acquisition module for obtaining the task type and task parameters of a task; a template calling module for calling a resource object template according to the task type and updating the resource object template according to the task parameters; an object generation module for generating at least one resource object according to the updated resource object template; and a resource request module for requesting, according to the resource object, the resources required by the task.
According to an embodiment of the present disclosure, the system further comprises a preprocessing module for preprocessing the task.
According to an embodiment of the present disclosure, the preprocessing module comprises at least one of the following: an installation submodule for parsing the required dependency programs according to the task parameters and installing them; a standardization submodule for standardizing the data to be processed according to the task parameters; or an optimization submodule for optimizing the task script.
According to an embodiment of the present disclosure, the acquisition module comprises an extraction submodule for receiving a user-defined image and extracting the task type and task parameters of the task from the image.
According to an embodiment of the present disclosure, the task comprises a distributed training task, and the task type comprises the execution engine used by the distributed training.
Another aspect of the present disclosure provides an electronic device comprising one or more processors and a storage device for storing one or more programs, wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to perform the method described in any one of the above.
Another aspect of the present disclosure provides a computer-readable medium having executable instructions stored thereon, the instructions, when executed, being used to implement the method described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions, the instructions, when executed, being used to implement the method described above.
According to embodiments of the present disclosure, the problem that application developers, for the tasks they select, must manually manage and invoke the underlying resources and thus face high selection and learning costs can be at least partially solved; automatic resource management and invocation for multiple kinds of tasks can therefore be supported, reducing development costs.
Brief description of the drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
Fig. 1 schematically illustrates an exemplary system architecture for the task processing method, system, and electronic device according to an embodiment of the present disclosure;
Fig. 2 schematically illustrates a flowchart of the task processing method according to an embodiment of the present disclosure;
Fig. 3 schematically illustrates a flowchart of the task processing method according to another embodiment of the present disclosure;
Fig. 4 schematically illustrates a block diagram of the task processing system according to an embodiment of the present disclosure;
Fig. 5 schematically illustrates a block diagram of the task processing system according to another embodiment of the present disclosure;
Fig. 6 schematically illustrates a block diagram of the preprocessing module according to an embodiment of the present disclosure;
Fig. 7 schematically illustrates a block diagram of the acquisition module according to an embodiment of the present disclosure; and
Fig. 8 schematically illustrates a block diagram of a computer system adapted to implement the task processing method and/or system according to an embodiment of the present disclosure.
Detailed description of the embodiments
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted to avoid unnecessarily obscuring the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. The words "a", "an", "the", and the like used herein should also cover the meanings of "a plurality of" and "various", unless the context clearly indicates otherwise. Furthermore, the terms "comprising", "including", and the like used herein indicate the presence of the stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted as having meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Where an expression similar to "at least one of A, B, and C, etc." is used, it should in general be interpreted according to the meaning of that expression as commonly understood by those skilled in the art (for example, "a system having at least one of A, B, and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C, etc.). Where an expression similar to "at least one of A, B, or C, etc." is used, it should likewise in general be interpreted according to the meaning commonly understood by those skilled in the art (for example, "a system having at least one of A, B, or C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C, etc.). Those skilled in the art should also understand that essentially any disjunctive word and/or phrase presenting two or more alternative items, whether in the specification, the claims, or the drawings, shall be understood to contemplate the possibilities of including one of the items, either of the items, or both items. For example, the phrase "A or B" should be understood to include the possibilities of "A", "B", or "A and B".
Embodiments of the present disclosure provide a method, system, and electronic device for task processing. The method may be implemented on a distributed computing platform, which is an underlying service platform that supports the distributed execution of application programs. The method comprises: obtaining the task type and task parameters of a task; calling a resource object template according to the task type and updating the resource object template according to the task parameters; generating at least one resource object according to the updated resource object template; and requesting, according to the resource object, the resources required by the task. The method provided by the embodiments of the present disclosure can support distributed computing for multiple kinds of tasks, so that developers need not concern themselves with the scheduling and management of underlying resources, which reduces development costs. For example, when the method provided by the embodiments of the present disclosure is applied to the field of machine learning, a machine learning platform can both support a variety of deep learning frameworks and support distributed model training. Developers can use such a platform to complete development in machine learning fields such as image recognition, speech recognition, natural language understanding, and weather forecasting.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which the task processing method according to an embodiment of the present disclosure can be applied.
As shown in Fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is the medium that provides communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104, so as to receive or send messages and the like. Various client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software (by way of example only), may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices that have display screens and support web browsing, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like.
The server 105 may be a server that provides various services, for example a back-end management server (by way of example only) that supports websites browsed by users with the terminal devices 101, 102, 103. The back-end management server may analyze and otherwise process received data such as user requests, and feed processing results (such as webpages, information, or data obtained or generated according to the user requests) back to the terminal devices.
It should be noted that the task processing method provided by the embodiments of the present disclosure can generally be performed by the server 105. Accordingly, the task processing apparatus provided by the embodiments of the present disclosure can generally be arranged in the server 105. The task processing method provided by the embodiments of the present disclosure may also be performed by a server or server cluster that is different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the task processing apparatus provided by the embodiments of the present disclosure may also be arranged in a server or server cluster that is different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
Fig. 2 schematically illustrates a flowchart of the task processing method according to an embodiment of the present disclosure.
As shown in Fig. 2, the method includes operations S210 to S240.
In operation S210, the task type and task parameters of a task are obtained.
In operation S220, a resource object template is called according to the task type, and the resource object template is updated according to the task parameters.
In operation S230, at least one resource object is generated according to the updated resource object template.
In operation S240, the resources required by the task are requested according to the resource object.
Through resource object templates, this method can support multiple kinds of tasks, so that developers need not concern themselves with the scheduling and management of underlying resources, which reduces development costs.
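The four operations above can be sketched as a short pipeline. The following is a minimal illustration under assumed data structures (plain dictionaries for templates and resource objects, and a `replicas` field controlling the object count); the disclosure itself does not prescribe a concrete implementation.

```python
# Minimal sketch of operations S210-S240 using plain dictionaries.
# The template keys and the "replicas" field are illustrative assumptions.

TEMPLATES = {
    "tensorflow": {"replicas": 1, "cpu": 2, "gpu": 0, "memory_gb": 4},
}

def process_task(task_type, task_params):
    # S220: call the resource object template for this task type and
    # update it with the user-supplied task parameters.
    template = dict(TEMPLATES[task_type])
    template.update(task_params)
    # S230: generate one resource object per replica, indexed from zero.
    objects = [dict(template, index=i) for i in range(template["replicas"])]
    # S240: turn each resource object into a resource request for the
    # cluster scheduling platform.
    return [{"index": o["index"], "cpu": o["cpu"], "gpu": o["gpu"]}
            for o in objects]

requests = process_task("tensorflow", {"replicas": 2, "gpu": 1})
```

Note how the user only supplies a task type and a parameter dictionary (obtained in S210); everything about how the resource objects are built stays behind the template lookup.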
According to an embodiment of the present disclosure, in operation S210, the task type and task parameters obtained may be those selected and entered by a user in a console, for example the deep learning framework to be used, startup parameters, and so on. Specifically, if the user chooses to perform model training with TensorFlow (a deep learning framework), the task parameters provided should include the number of parameter servers, the requested number of CPUs (central processing units), the number of GPUs (graphics processors), the memory size, the mount path of the shared storage disk, the start command, the startup parameters, and so on. It can be understood that the task parameters differ for different training models.
According to the embodiment of the present disclosure, it can also be the user-defined mirror received to obtain the task type of task and parameter Picture, the type and task parameters of being extracted from mirror image for task.For example, the username and password that user is generated at random by system Log in a container based on foundation image, the dependence bag of training script is installed manually as needed, running environment is set, can So that training script is uploaded into ad-hoc location, specified in order is started.System will using prior art after the completion of aforesaid operations Current container state saves as mirror image.This mirror image can be used in follow-up process using when creating the container of training mission.It is this The mode that User Defined makes mirror image meets the complicated dependence that can not be dealt carefully with the automatic preprocessing process of system and ring The problem of border is installed so that the applicability of this method is more preferable.
According to the embodiment of the present disclosure, in operation S220, according to above-mentioned task type, resource object template is called, and according to The task parameters update the resource object template.The resource object template is a mould with parameter by pre-setting Plate file, it is right in the value replacement resource object template for operating the task parameters described in 210 that the parameter in template file is used in The task parameters answered, one resource object template for being specifically used for above-mentioned task of generation.
For example, user, which have selected TensorFlow, carries out model training, and the quantity M of parameter server is provided, this is System calls the template file of the TensorFlow training patterns pre-set, the mould of the TensorFlow training patterns pre-set The quantity of parameter server is a in plate file, then by the parameter server in the template file of TensorFlow training patterns Quantity a with transmit come parameter M replacement.
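The parameter substitution described above can be sketched with a placeholder-based template. The YAML-like template text and field names are assumptions; only the idea of replacing the pre-set parameter-server count a with the user-supplied value M comes from the description:

```python
from string import Template

# A pre-set, parameterized template file for TensorFlow training
# (illustrative content; the real template format is not specified).
TF_TEMPLATE = Template(
    "kind: TrainingJob\n"
    "framework: tensorflow\n"
    "parameter_servers: $ps_count\n"
    "workers: $worker_count\n"
)

def render_template(template, task_params):
    # Replace each placeholder with the corresponding task parameter,
    # yielding a resource object template specific to this task.
    return template.substitute(task_params)

spec = render_template(TF_TEMPLATE, {"ps_count": 3, "worker_count": 8})
```

`string.Template` is used here only because it makes the substitution step explicit; a real system might use any templating engine with the same effect.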
According to an embodiment of the present disclosure, in operation S230, at least one resource object is generated according to the updated resource object template. The generated resource object template needs to be orchestrated into the required number of resource objects. The produced resource objects can, for example, be numbered with an incrementing index starting from zero. According to an embodiment of the present disclosure, in operation S240, each generated resource object can serve directly as input to a cluster resource management and scheduling platform, which realizes the scheduling and allocation of resources.
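Producing the required number of resource objects from one rendered template, with an incrementing zero-based index, could look like the following sketch (the `name` field and the `<name>-<index>` naming convention are assumptions, not part of the disclosure):

```python
def generate_resource_objects(template, count):
    """Orchestrate one updated template into `count` resource objects,
    numbered with an incrementing index starting from zero."""
    return [dict(template, index=i, name=f"{template['name']}-{i}")
            for i in range(count)]

# Each resulting object could be handed directly to a cluster
# resource management and scheduling platform.
objects = generate_resource_objects({"name": "tf-worker", "cpu": 4}, 3)
```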
According to the embodiment of the present disclosure, the task can be distributed training mission, support distributed training mission, significantly Improve the training speed of deep learning model.The task type can perform to draw used in the distributed training Hold up, such as TensorFlow, Caffe, MXNet, this method supports a variety of enforcement engines, reduces moving for code in machine learning Move into this and learning cost.
Fig. 3 schematically illustrates a flowchart of the task processing method according to another embodiment of the present disclosure.
As shown in Fig. 3, on the basis of the embodiment illustrated in Fig. 2, the method further includes operation S310.
In operation S310, the task is preprocessed.
According to an embodiment of the present disclosure, preprocessing the task may consist of parsing the required dependency programs according to the task parameters and installing them. For example, according to files such as a user-defined requirements file or shell script, the names and version numbers of the required dependency packages are obtained and downloaded from a source mirror or the external network for installation. If there are many scripts, they can be packed and uploaded; the system unpacks the scripts and passes the start command and parameters provided by the user to the next module, for use when starting the distributed computing task.
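Parsing a user-supplied requirements-style file into package names and version numbers, as the installation step describes, can be sketched as follows (the `==` pinning syntax follows pip's convention; everything else is illustrative):

```python
def parse_requirements(text):
    """Parse a requirements-style file into (name, version) pairs;
    version is None when the line does not pin one. An installer step
    could then download each package from a source mirror or the web."""
    dependencies = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, _, version = line.partition("==")
        dependencies.append((name, version or None))
    return dependencies

deps = parse_requirements("numpy==1.21.0\n# a comment\nrequests\n")
```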
According to an embodiment of the present disclosure, preprocessing the task may also consist of standardizing the data to be processed according to the task parameters. For example, the data uploaded by the user is processed as needed using a provided processing script, converting the raw data into data that the training script can read directly.
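For a simple CSV-style upload, converting raw data into a form a training script can read directly might look like the following; the `label,feature,...` record layout is an assumed example, not something the disclosure specifies:

```python
def standardize_records(raw_lines):
    """Convert raw 'label,feature1,feature2,...' text lines into
    (label, features) pairs that a training script can consume directly."""
    samples = []
    for line in raw_lines:
        parts = line.strip().split(",")
        label = int(parts[0])
        features = [float(x) for x in parts[1:]]
        samples.append((label, features))
    return samples

samples = standardize_records(["1,0.5,0.25", "0,1.0,0.0"])
```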
According to an embodiment of the present disclosure, preprocessing the task may also consist of optimizing the task script. For example, the task script can be checked: an error prompt can be given when an error is detected, and when redundancy is detected in the task script, a prompt can be given or the script can be optimized automatically. Through preprocessing, the method reduces the workload during task execution.
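A simple form of this script check can be built on Python's `ast` module: report a syntax error if parsing fails, and flag duplicated imports as one concrete kind of redundancy. This is a sketch of the idea only; the disclosure does not specify how errors or redundancy are detected:

```python
import ast

def check_task_script(source):
    """Pre-flight check of a task script: report a syntax error if the
    script does not parse, and flag duplicate imports as redundancy."""
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return {"ok": False, "error": str(exc)}
    imports = [alias.name
               for node in ast.walk(tree) if isinstance(node, ast.Import)
               for alias in node.names]
    redundant = sorted({name for name in imports if imports.count(name) > 1})
    return {"ok": True, "redundant_imports": redundant}

report = check_task_script("import os\nimport os\nprint(os.sep)\n")
```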
Fig. 4 schematically illustrates a block diagram of a task processing system 400 according to an embodiment of the present disclosure.
As shown in Fig. 4, the task processing system 400 includes an acquisition module 410, a template calling module 420, an object generation module 430, and a resource request module 440.
The acquisition module 410 performs, for example, the operation S210 described above with reference to Fig. 2, and is used for obtaining the task type and task parameters of a task.
The template calling module 420 performs, for example, the operation S220 described above with reference to Fig. 2, and is used for calling a resource object template according to the task type and updating the resource object template according to the task parameters.
The object generation module 430 performs, for example, the operation S230 described above with reference to Fig. 2, and is used for generating at least one resource object according to the updated resource object template.
The resource request module 440 performs, for example, the operation S240 described above with reference to Fig. 2, and is used for requesting, according to the resource object, the resources required by the task.
Fig. 5 schematically illustrates a block diagram of a task processing system 500 according to another embodiment of the present disclosure.
As shown in Fig. 5, on the basis of the embodiment illustrated in Fig. 4, the system further includes a preprocessing module 510.
The preprocessing module 510 performs, for example, the operation S310 described above with reference to Fig. 3, and is used for preprocessing the task.
Fig. 6 schematically illustrates a block diagram of the preprocessing module 510 according to an embodiment of the present disclosure.
As shown in Fig. 6, the preprocessing module 510 includes an installation submodule 511, a standardization submodule 512, and an optimization submodule 513.
The installation submodule 511 is used for parsing the required dependency programs according to the task parameters and installing them.
The standardization submodule 512 is used for standardizing the data to be processed according to the task parameters.
The optimization submodule 513 is used for optimizing the task script.
It can be understood that the preprocessing module 510 may include any one, any combination of two, or all three of the above installation submodule 511, standardization submodule 512, and optimization submodule 513.
Fig. 7 schematically illustrates a block diagram of the acquisition module 410 according to an embodiment of the present disclosure.
As shown in Fig. 7, the acquisition module 410 includes an extraction submodule 411.
The extraction submodule 411 is used for receiving a user-defined image and extracting the task type and task parameters of the task from the image.
It can be understood that any number of the acquisition module 410, the template calling module 420, the object generation module 430, the resource request module 440, the preprocessing module 510, the installation submodule 511, the standardization submodule 512, the optimization submodule 513, and the extraction submodule 411 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present invention, at least one of the above modules may be at least partially implemented as a hardware circuit, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system in package, or an application-specific integrated circuit (ASIC), or may be implemented in hardware or firmware by any other reasonable means of integrating or packaging circuits, or by an appropriate combination of the three implementation approaches of software, hardware, and firmware. Alternatively, at least one of the above modules may be at least partially implemented as a computer program module which, when run by a computer, can perform the functions of the corresponding module.
Fig. 8 schematically illustrates a block diagram of a computer system adapted to implement the task processing method according to an embodiment of the present disclosure.
The computer system shown in Fig. 8 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in Fig. 8, a computer system 800 according to an embodiment of the present disclosure includes a processor 801, which can perform various appropriate actions and processing according to programs stored in a read-only memory (ROM) 802 or programs loaded from a storage portion 808 into a random access memory (RAM) 803. The processor 801 may include, for example, a general-purpose microprocessor (such as a CPU), an instruction set processor and/or a related chipset, and/or a special-purpose microprocessor (for example, an application-specific integrated circuit (ASIC)), and so on. The processor 801 may also include onboard memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to the embodiments of the present disclosure described with reference to Fig. 2 or Fig. 3.
In the RAM 803, various programs and data required for the operation of the system 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to one another through a bus 804. The processor 801 performs the various operations of the methods according to the embodiments of the present disclosure described above with reference to Fig. 2 or Fig. 3 by executing programs in the ROM 802 and/or the RAM 803. It should be noted that the programs may also be stored in one or more memories other than the ROM 802 and the RAM 803. The processor 801 may also perform the various operations of the methods according to the embodiments of the present disclosure described above with reference to Fig. 2 or Fig. 3 by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 800 may further include an input/output (I/O) interface 805, which is also connected to the bus 804. The system 800 may further include one or more of the following components connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output portion 807 including a cathode-ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage portion 808 including a hard disk and the like; and a communication portion 809 including a network interface card such as a LAN card, a modem, and the like. The communication portion 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read therefrom can be installed into the storage portion 808 as needed.
According to embodiments of the present disclosure, the methods described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 809, and/or installed from the removable medium 811. When the computer program is executed by the processor 801, the above-described functions defined in the systems of the embodiments of the present disclosure are performed. According to embodiments of the present disclosure, the systems, apparatuses, modules, units, and the like described above may be implemented by computer program modules.
It should be noted that the computer-readable medium shown in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above. According to embodiments of the present disclosure, the computer-readable medium may include the ROM 802 and/or the RAM 803 described above, and/or one or more memories other than the ROM 802 and the RAM 803.
Flow chart and block diagram in accompanying drawing, it is illustrated that according to the system of the various embodiments of the disclosure, method and computer journey Architectural framework in the cards, function and the operation of sequence product.At this point, each square frame in flow chart or block diagram can generation The part of one module of table, program segment or code, a part for above-mentioned module, program segment or code include one or more For realizing the executable instruction of defined logic function.It should also be noted that some as replace realization in, institute in square frame The function of mark can also be with different from the order marked in accompanying drawing generation.For example, two square frames succeedingly represented are actual On can perform substantially in parallel, they can also be performed in the opposite order sometimes, and this is depending on involved function.Also It is noted that the combination of each square frame and block diagram in block diagram or flow chart or the square frame in flow chart, can use and perform rule Fixed function or the special hardware based system of operation are realized, or can use the group of specialized hardware and computer instruction Close to realize.
As another aspect, the present disclosure further provides a computer-readable medium. The computer-readable medium may be included in the device described in the above embodiments, or may exist separately without being assembled into the device. The above computer-readable medium carries one or more programs that, when executed by the device, cause the device to perform the method according to the embodiments of the present disclosure described with reference to Fig. 2 or Fig. 3.
Embodiments of the present disclosure have been described above. However, these embodiments are for illustration only and are not intended to limit the scope of the present disclosure. Although the embodiments have been described separately, this does not mean that the measures in the respective embodiments cannot be used in combination to advantage. The scope of the present disclosure is defined by the appended claims and their equivalents. Without departing from the scope of the present disclosure, those skilled in the art may make various substitutions and modifications, all of which shall fall within the scope of the present disclosure.

Claims (12)

1. A task processing method, comprising:
acquiring a task type and task parameters of a task;
calling a resource object template according to the task type, and updating the resource object template according to the task parameters;
generating at least one resource object according to the updated resource object template; and
requesting the resources required by the task according to the resource object.
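The flow of claim 1 can be sketched in code. Everything below is an illustrative assumption, not part of the patent: the template registry, the Kubernetes-Job-like field names (`kind`, `spec`, `replicas`, `resources`), and the parameter keys are all hypothetical stand-ins for whatever templates and parameters an actual deployment would use.

```python
import copy

# Hypothetical resource object templates, keyed by task type.
TEMPLATES = {
    "tensorflow": {
        "kind": "Job",
        "spec": {"replicas": None, "image": None,
                 "resources": {"cpu": None, "memory": None}},
    },
}

def process_task(task_type, task_params):
    """Sketch of the claimed flow: call a template by task type, update it
    with the task parameters, generate one resource object per replica,
    then hand the objects to whatever requests the resources."""
    template = copy.deepcopy(TEMPLATES[task_type])   # call template by type
    spec = template["spec"]
    spec["replicas"] = task_params["replicas"]       # update with parameters
    spec["image"] = task_params["image"]
    spec["resources"]["cpu"] = task_params["cpu"]
    spec["resources"]["memory"] = task_params["memory"]
    # generate at least one resource object from the updated template
    objects = [{"name": f"worker-{i}", **copy.deepcopy(template)}
               for i in range(spec["replicas"])]
    return objects  # a scheduler would then request resources per object

objs = process_task("tensorflow", {"replicas": 2, "image": "train:v1",
                                   "cpu": 4, "memory": "8Gi"})
```

Deep-copying the template per object keeps the generated resource objects independent, so a later per-object mutation (e.g. a node affinity) cannot leak into siblings.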
2. The method according to claim 1, wherein after the acquiring of the task type and task parameters of the task, the method further comprises:
preprocessing the task.
3. The method according to claim 2, wherein the preprocessing of the task comprises at least one of:
parsing required dependency programs according to the task parameters and installing them;
normalizing the data to be processed according to the task parameters; or
optimizing the task script.
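The three preprocessing options of claim 3 might look like the helpers below. These are assumptions for illustration only: the parameter keys (`dependencies`, `range`), the use of `pip` as the installer, min-max scaling as the normalization, and blank-line stripping as the "optimization" are all hypothetical stand-ins.

```python
import shlex
import subprocess

def install_dependencies(task_params):
    """Parse the required dependency programs from the task parameters
    and install them (here via pip, purely as an example)."""
    for pkg in task_params.get("dependencies", []):
        subprocess.run(shlex.split(f"pip install {pkg}"), check=True)

def normalize_data(records, task_params):
    """Normalize the data to be processed, here by min-max scaling
    into a target range taken from the task parameters."""
    lo, hi = task_params.get("range", (0.0, 1.0))
    mn, mx = min(records), max(records)
    return [lo + (x - mn) * (hi - lo) / (mx - mn) for x in records]

def optimize_script(script: str) -> str:
    """A trivial stand-in for task-script optimization:
    drop blank lines before the script is shipped to workers."""
    return "\n".join(line for line in script.splitlines() if line.strip())
```

Since the claim says "at least one of", a real preprocessor would dispatch on the task parameters and run only the steps each task actually needs.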
4. The method according to claim 1, wherein the acquiring of the task type and task parameters of the task comprises:
receiving a user-defined image, and extracting the task type and task parameters of the task from the image.
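One plausible reading of claim 4 is that the user-defined container image carries the task information as metadata labels. The label keys (`task.type`, `task.params`) and the JSON encoding below are assumptions of this sketch, not details given by the patent.

```python
import json

def extract_task_info(image_labels: dict):
    """Extract the task type and task parameters from a user-defined
    image, assuming they are stored as image labels (hypothetical keys)."""
    task_type = image_labels["task.type"]
    task_params = json.loads(image_labels["task.params"])
    return task_type, task_params

# e.g. labels as they might appear in an image's config metadata
labels = {"task.type": "tensorflow",
          "task.params": '{"replicas": 2, "cpu": 4}'}
task_type, params = extract_task_info(labels)
```

Packing the task description into the image itself lets a user submit a single artifact; the platform then derives both the template to call (from the type) and the values to fill in (from the parameters).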
5. The method according to claim 1, wherein:
the task comprises a distributed training task; and
the task type comprises an execution engine used by the distributed training.
6. A task processing system, comprising:
an acquisition module configured to acquire a task type and task parameters of a task;
a template calling module configured to call a resource object template according to the task type and update the resource object template according to the task parameters;
an object generation module configured to generate at least one resource object according to the updated resource object template; and
a resource request module configured to request the resources required by the task according to the resource object.
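The module decomposition of claim 6 maps naturally onto one class per module wired into a pipeline. The class names, the toy template, and the stubbed resource request below are illustrative assumptions, not the patent's implementation.

```python
class AcquisitionModule:
    """Acquires the task type and task parameters of a task."""
    def acquire(self, request):
        return request["type"], request["params"]

class TemplateCallingModule:
    """Calls a resource object template by type and updates it."""
    TEMPLATES = {"tensorflow": {"image": None, "replicas": None}}
    def call_and_update(self, task_type, params):
        template = dict(self.TEMPLATES[task_type])
        template.update(params)
        return template

class ObjectGenerationModule:
    """Generates at least one resource object from the updated template."""
    def generate(self, template):
        return [dict(template, name=f"worker-{i}")
                for i in range(template["replicas"])]

class ResourceRequestModule:
    """Requests the resources required by the task for each object."""
    def request(self, objects):
        # A real system would submit the objects to a cluster scheduler.
        return [f"requested:{obj['name']}" for obj in objects]

class TaskProcessingSystem:
    """Wires the four claimed modules into one pipeline."""
    def __init__(self):
        self.acq = AcquisitionModule()
        self.tpl = TemplateCallingModule()
        self.gen = ObjectGenerationModule()
        self.req = ResourceRequestModule()
    def run(self, request):
        task_type, params = self.acq.acquire(request)
        template = self.tpl.call_and_update(task_type, params)
        return self.req.request(self.gen.generate(template))

results = TaskProcessingSystem().run(
    {"type": "tensorflow", "params": {"image": "train:v1", "replicas": 2}})
```

Keeping each claimed module behind its own class boundary also matches claims 7 to 9, where optional submodules (preprocessing, extraction) slot into the same pipeline.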
7. The system according to claim 6, further comprising:
a preprocessing module configured to preprocess the task.
8. The system according to claim 7, wherein the preprocessing module comprises at least one of:
an installation submodule configured to parse required dependency programs according to the task parameters and install them;
a normalization submodule configured to normalize the data to be processed according to the task parameters; or
an optimization submodule configured to optimize the task script.
9. The system according to claim 6, wherein the acquisition module comprises:
an extraction submodule configured to receive a user-defined image and extract the task type and task parameters of the task from the image.
10. The system according to claim 6, wherein:
the task comprises a distributed training task; and
the task type comprises an execution engine used by the distributed training.
11. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to perform the method according to any one of claims 1 to 5.
12. A computer-readable medium having executable instructions stored thereon which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 5.
CN201710951710.6A 2017-10-12 2017-10-12 Task processing method, system and electronic equipment Pending CN107844371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710951710.6A CN107844371A (en) 2017-10-12 2017-10-12 Task processing method, system and electronic equipment


Publications (1)

Publication Number Publication Date
CN107844371A 2018-03-27

Family

ID=61662325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710951710.6A Pending CN107844371A (en) 2017-10-12 2017-10-12 Task processing method, system and electronic equipment

Country Status (1)

Country Link
CN (1) CN107844371A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889047A (en) * 2005-06-27 2007-01-03 腾讯科技(深圳)有限公司 System and method for realizing program resource sharing
CN101821709A (en) * 2007-09-11 2010-09-01 西安姆贝拉有限公司 System, method and graphical user interface for workflow generation, deployment and/or execution
CN102096596A (en) * 2010-11-29 2011-06-15 华中科技大学 Cloud computing service Cache system based on internal memory template of virtual machine
CN105007323A (en) * 2015-07-22 2015-10-28 上海斐讯数据通信技术有限公司 System and method for arranging cloud resources
CN105183561A (en) * 2015-09-02 2015-12-23 浪潮(北京)电子信息产业有限公司 Resource distribution method and resource distribution system
CN105577779A (en) * 2015-12-21 2016-05-11 用友网络科技股份有限公司 Method and system for containerized deployment of large enterprise private cloud
US20160364262A1 (en) * 2015-06-10 2016-12-15 Tata Consultancy Services Limited System and method for generating service operation implementation
CN106354563A (en) * 2016-08-29 2017-01-25 广州市香港科大***研究院 Distributed computing system for 3D (three-dimensional reconstruction) and 3D reconstruction method
CN106875152A (en) * 2016-12-16 2017-06-20 新华三技术有限公司 A kind of task creation method and device
CN107111519A (en) * 2014-11-11 2017-08-29 亚马逊技术股份有限公司 For managing the system with scheduling container
CN107203424A (en) * 2017-04-17 2017-09-26 北京奇虎科技有限公司 A kind of method and apparatus that deep learning operation is dispatched in distributed type assemblies


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIAO, Wei (乔玮): "Research on a Containerized Scheduling Mechanism for Distributed Deep Learning Systems", PKUFINELAB *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764605A (en) * 2018-04-04 2018-11-06 北京潘达互娱科技有限公司 A kind of task dissemination method, device, electronic equipment and storage medium
US11036480B2 (en) 2018-06-08 2021-06-15 Shanghai Cambricon Information Technology Co., Ltd. General machine learning model, and model file generation and parsing method
US11726754B2 (en) 2018-06-08 2023-08-15 Shanghai Cambricon Information Technology Co., Ltd. General machine learning model, and model file generation and parsing method
US11403080B2 (en) 2018-06-08 2022-08-02 Shanghai Cambricon Information Technology Co., Ltd. General machine learning model, and model file generation and parsing method
US11379199B2 (en) 2018-06-08 2022-07-05 Shanghai Cambricon Information Technology Co., Ltd. General machine learning model, and model file generation and parsing method
CN110647996A (en) * 2018-06-08 2020-01-03 上海寒武纪信息科技有限公司 Execution method and device of universal machine learning model and storage medium
US11334330B2 (en) 2018-06-08 2022-05-17 Shanghai Cambricon Information Technology Co., Ltd. General machine learning model, and model file generation and parsing method
US11334329B2 (en) 2018-06-08 2022-05-17 Shanghai Cambricon Information Technology Co., Ltd. General machine learning model, and model file generation and parsing method
US11307836B2 (en) 2018-06-08 2022-04-19 Shanghai Cambricon Information Technology Co., Ltd. General machine learning model, and model file generation and parsing method
CN110689134A (en) * 2018-07-05 2020-01-14 第四范式(北京)技术有限公司 Method, apparatus, device and storage medium for performing machine learning process
CN109471718A (en) * 2018-10-12 2019-03-15 平安科技(深圳)有限公司 Computing resource configuration method, device, equipment and medium based on recognition of face
CN109471718B (en) * 2018-10-12 2023-09-22 平安科技(深圳)有限公司 Computing resource configuration method, device, equipment and medium based on face recognition
CN111105006A (en) * 2018-10-26 2020-05-05 杭州海康威视数字技术股份有限公司 Deep learning network training system and method
CN111105006B (en) * 2018-10-26 2023-08-04 杭州海康威视数字技术股份有限公司 Deep learning network training system and method
CN109739514B (en) * 2018-12-21 2021-03-02 中科寒武纪科技股份有限公司 Parameter processing method and related product
CN109739514A (en) * 2018-12-21 2019-05-10 北京中科寒武纪科技有限公司 Parameter processing method and Related product
CN109857475B (en) * 2018-12-27 2020-06-16 深圳云天励飞技术有限公司 Framework management method and device
CN109857475A (en) * 2018-12-27 2019-06-07 深圳云天励飞技术有限公司 A kind of method and device of frame management
US11699073B2 (en) 2018-12-29 2023-07-11 Cambricon Technologies Corporation Limited Network off-line model processing method, artificial intelligence processing device and related products
CN112288344A (en) * 2019-07-24 2021-01-29 北京京东乾石科技有限公司 Scheduling task data processing method, device, equipment and storage medium
CN111488211A (en) * 2020-04-09 2020-08-04 北京嘀嘀无限科技发展有限公司 Task processing method, device, equipment and medium based on deep learning framework

Similar Documents

Publication Publication Date Title
CN107844371A (en) Task processing method, system and electronic equipment
KR102342604B1 (en) Method and apparatus for generating neural network
US11755316B2 (en) Customizable cloud-based software platform
CN107451109A (en) Report form generation method and system
CN108804327A (en) A kind of method and apparatus of automatic Data Generation Test
CN107590186A (en) Management and the method and policy engine system for performing data processing policy
CN107844324A (en) Customer terminal webpage redirects treating method and apparatus
CN109981719A (en) Information processing method and its system, computer system and computer readable medium
CN107656768A (en) Control the method and its system of page jump
CN106896937A (en) Method and apparatus for being input into information
CN114911465B (en) Method, device and equipment for generating operator and storage medium
CN110400201A (en) Information displaying method, device, electronic equipment and medium
CN107656911A (en) Form processing method and its system
CN109241033A (en) The method and apparatus for creating real-time data warehouse
CN107908662A (en) The implementation method and realization device of search system
CN108958826A (en) The method and apparatus of dynamic configuration application installation package
CN107305528A (en) Application testing method and device
CN107515947A (en) picture loading method and its system
CN109597912A (en) Method for handling picture
CN111580883B (en) Application program starting method, device, computer system and medium
CN109981546A (en) The method and apparatus for obtaining the far call relationship between application module
US11438403B2 (en) Page presentation method and system, computer system, and computer readable medium
CN110618811A (en) Information presentation method and device
CN109933727A (en) User's portrait generation method and system, user's portrait application method and system
CN112181408A (en) Method and device for displaying view list on front page of application program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180327