CN115981872A - Method and device for calling algorithm resources, electronic equipment and storage medium - Google Patents

Method and device for calling algorithm resources, electronic equipment and storage medium

Info

Publication number: CN115981872A
Authority: CN (China)
Prior art keywords: platform, resource, connector, target, algorithm
Legal status: Granted
Application number: CN202310263804.XA
Other languages: Chinese (zh)
Other versions: CN115981872B
Inventor: 苑辰
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310263804.XA
Publication of CN115981872A; application granted, publication of CN115981872B

Landscapes

  • Stored Programmes (AREA)

Abstract

The disclosure provides a method and apparatus for calling algorithm resources, an electronic device, and a storage medium, relating to the field of artificial intelligence and in particular to cloud computing and artificial intelligence infrastructure technology. The specific implementation scheme is as follows: in response to receiving a first call request for a target scheduler from an algorithm platform, a target resource platform is determined among at least one resource platform to which the target scheduler is connected, each of which is used for processing an execution node in the resource scheduling flow corresponding to the target scheduler; a second call request is sent to the target resource platform, instructing it to process the execution node and obtain a corresponding node processing result; the node processing result is received from the target resource platform, and a call result of the resource scheduling flow is obtained based on it; and the call result is sent to the algorithm platform. The method and apparatus can improve the utilization rate of algorithm resources.

Description

Method and device for calling algorithm resources, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technology, and in particular, to cloud computing and artificial intelligence infrastructure technology.
Background
In recent years, the performance of AI (Artificial Intelligence) algorithms has achieved continuous breakthroughs, and the scale of algorithms keeps growing. Accordingly, the demand for algorithm resources (e.g., data, computing power) is also increasing. However, the expansion of algorithm resources faces bottlenecks, and improving the utilization rate of underlying resources has become a hot topic in the AI field.
Disclosure of Invention
The disclosure provides a calling method and device of algorithm resources, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a method for calling an algorithm resource, including:
in response to receiving a first call request for a target scheduler from an algorithm platform, determining a target resource platform among at least one resource platform to which the target scheduler is connected; each resource platform in the at least one resource platform is used for processing an execution node in a resource scheduling flow corresponding to the target scheduler;
sending a second calling request to the target resource platform; the second call request is used for indicating the target resource platform to process the execution node to obtain a corresponding node processing result;
receiving a node processing result from a target resource platform, and obtaining a calling result of a resource scheduling flow based on the node processing result;
and sending the calling result to the algorithm platform.
According to another aspect of the present disclosure, there is provided an apparatus for invoking algorithm resources, including:
the request processing module is used for determining, in response to receiving a first call request for the target scheduler from the algorithm platform, a target resource platform among at least one resource platform connected with the target scheduler; each resource platform in the at least one resource platform is used for processing an execution node in a resource scheduling flow corresponding to the target scheduler;
the first sending module is used for sending a second calling request to the target resource platform; the second call request is used for indicating the target resource platform to process the execution node to obtain a corresponding node processing result;
the first receiving module is used for receiving the node processing result from the target resource platform and obtaining the calling result of the resource scheduling flow based on the node processing result;
and the second sending module is used for sending the calling result to the algorithm platform.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of the embodiments of the present disclosure.
According to the technical scheme of the embodiments of the present disclosure, the cross-platform resource scheduling capability of the target scheduler can be realized through the orchestration of the resource scheduling flow and the access of at least one resource platform. In this manner, the algorithm platform may implement resource invocation by invoking the target scheduler. Through the access and centralized scheduling of cross-platform resources, the utilization rate of algorithm resources can be improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flowchart of a method for calling algorithm resources according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a service interface and a connector of a scheduler in an example application of the present disclosure;
FIG. 3 is a schematic diagram of an implementation flow of a scheduler in another application example of the present disclosure;
FIG. 4 is a flow chart of a calling method of an algorithm resource in another application example of the disclosure;
FIG. 5 is a flow diagram of an algorithm computing power cross-platform image synchronization service in an application example;
FIG. 6 is a flow diagram of an algorithm computing power cross-platform deployment service in an application example;
FIG. 7 is a schematic block diagram of a means for invoking algorithm resources in one embodiment of the present disclosure;
FIG. 8 is a schematic block diagram of a calling device of algorithm resources in another embodiment of the present disclosure;
FIG. 9 is a block diagram of an electronic device for implementing a calling method of an algorithm resource of an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C. The terms "first" and "second" are used to refer to and distinguish similar objects, and do not necessarily imply a sequence or order, nor do they limit the number to two; there may be one or more "first" or "second" objects.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a method for calling an algorithm resource according to an embodiment of the present disclosure. The method may be implemented based on a bus, where the bus refers to a service module providing uniform adaptation and interfacing for different platforms; the bus may also be referred to as a smart bus. Illustratively, the bus may be integrated within a terminal, a server cluster, or other processing device. In the disclosed embodiment, at least one scheduler is deployed on the bus, different schedulers may correspond to different resource scheduling flows, and the bus may receive and process call requests for any scheduler. As shown in fig. 1, the method may include:
s110, in response to receiving a first calling request aiming at a target scheduler from an algorithm platform, determining a target resource platform in at least one resource platform connected with the target scheduler; each resource platform in the at least one resource platform is used for processing an execution node in the resource scheduling flow corresponding to the target scheduler.
S120, sending a second calling request to the target resource platform; and the second call request is used for indicating the target resource platform to process the execution node to obtain a corresponding node processing result.
S130, receiving a node processing result from the target resource platform, and obtaining a calling result of the resource scheduling flow based on the node processing result.
And S140, sending a calling result to the algorithm platform.
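The steps S110-S140 above can be sketched in code. The following is a hypothetical, minimal illustration only — the classes and names (`Bus`, `Scheduler`, `ResourcePlatform`, the rotation-based platform choice) are invented for this sketch and are not defined by the patent:

```python
class ResourcePlatform:
    """Stand-in for a resource platform that processes execution nodes."""
    def __init__(self, name):
        self.name = name

    def process(self, node, request):
        # S120: the target platform processes the execution node (stubbed).
        return f"{self.name}:{node}:done"

class Scheduler:
    """A scheduler corresponds to one resource scheduling flow."""
    def __init__(self, flow_nodes, platforms):
        self.flow_nodes = flow_nodes    # execution nodes of the flow
        self.platforms = platforms      # at least one connected platform

class Bus:
    """The bus receives call requests for any deployed scheduler."""
    def __init__(self, schedulers):
        self.schedulers = schedulers

    def handle_call(self, scheduler_id, request):
        scheduler = self.schedulers[scheduler_id]
        results = []
        for node in scheduler.flow_nodes:
            # S110: determine a target platform (simple rotation here,
            # standing in for a real load-distribution policy).
            target = scheduler.platforms[len(results) % len(scheduler.platforms)]
            # S120/S130: second call request -> node processing result.
            results.append(target.process(node, request))
        # S130/S140: the node results yield the flow's call result.
        return results

bus = Bus({"s1": Scheduler(["image_pull", "workload_create"],
                           [ResourcePlatform("p1"), ResourcePlatform("p2")])})
print(bus.handle_call("s1", {"need": "gpu"}))
```

Each node is handed to one connected platform in turn; a real bus would apply the load-distribution policy discussed below instead of simple rotation.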
Illustratively, the algorithm platform may include a server, a cluster of servers, or the like for providing algorithm services. The resource platform may include a server, a cluster of servers, etc. for providing algorithmic resource services. The algorithm resources may include data, computing power, and the like, that is, the resource platform may include a data platform, a computing power platform, and the like.
Illustratively, a resource scheduling flow may refer to the flow that an algorithm platform needs to complete in order to call an algorithm resource. The resource scheduling flow may include one or more execution nodes. Taking the example in which the algorithm resource includes computing power, the resource scheduling flow by which the algorithm platform calls the computing power resource may include a plurality of execution nodes such as image verification, image pull, image status query, workload verification, and workload creation. Different resource scheduling flows include different execution nodes.
Optionally, different resource scheduling flows may be predetermined, and a corresponding scheduler is configured according to the resource scheduling flows, so that when an algorithm platform needs to execute a certain resource scheduling flow, a call request for a scheduler (i.e., a target scheduler) corresponding to the resource scheduling flow may be initiated to a bus, so that the bus calls the resource platform to execute the resource scheduling flow.
Alternatively, the bus may access resource platforms of different service types or provided by different vendors. Illustratively, for an execution node in a resource scheduling flow, the bus may access multiple resource platforms capable of processing that node, e.g., resource platforms of different vendors that provide the same service. When a call request for a target scheduler is received and the execution of a resource scheduling flow is triggered, the bus selects the currently adopted resource platform (i.e., the target resource platform) from the available resource platforms, sends a call request to it, obtains that platform's processing result for the execution node, derives the call result of the resource scheduling flow from the processing result, and returns the call result to the algorithm platform. The bus may select the target resource platform according to a preconfigured load distribution policy, or according to an instruction from the algorithm platform.
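The patent leaves the concrete load-distribution policy open. As a hedged illustration only, one plausible policy ("least pending load, unless the algorithm platform names a platform explicitly") could look like this; all names are invented:

```python
def pick_target(platforms, loads, requested=None):
    """Return the platform named by the algorithm platform, if any;
    otherwise the connected platform with the smallest current load."""
    if requested is not None and requested in platforms:
        return requested  # instruction from the algorithm platform wins
    # Preconfigured load-distribution policy: least loaded platform.
    return min(platforms, key=lambda p: loads.get(p, 0))

platforms = ["vendor_a", "vendor_b", "vendor_c"]
loads = {"vendor_a": 5, "vendor_b": 2, "vendor_c": 7}
print(pick_target(platforms, loads))              # least loaded platform
print(pick_target(platforms, loads, "vendor_c"))  # explicit instruction wins
```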
It can be seen that, according to the above algorithm resource calling method, the cross-platform resource scheduling capability of the target scheduler can be realized through the orchestration of the resource scheduling flow and the access of at least one resource platform. In this manner, the algorithm platform may implement resource invocation by invoking the target scheduler. Through the access and centralized scheduling of cross-platform resources, the utilization rate of algorithm resources can be improved.
In an exemplary embodiment, the target scheduler includes at least one connector corresponding to each of the at least one execution node in the resource scheduling flow; each connector in the at least one connector is connected with at least one resource platform, and the at least one resource platform is used for processing the execution node corresponding to the connector.
Illustratively, the target scheduler may be derived by orchestrating and combining at least one connector. Specifically, each scheduler on the bus may be arranged and combined from connectors, where each connector corresponds to a different execution node, i.e., each connector is used to implement a different function. Each connector can be connected with one or more resource platforms, and different connectors may connect to the same resource platform or to different resource platforms, so that for each execution node an appropriate resource platform can be adopted to process it.
Taking the example where the resource platform comprises a data platform, assuming that one resource scheduling flow comprises an execution node for calling a parameter and an execution node for calling an image, the scheduler may comprise a first connector connecting multiple parameter platforms and a second connector connecting multiple image platforms. When the scheduler is called, the execution nodes can be triggered in sequence: when the execution node for calling the parameter is triggered, a target parameter platform is selected from the parameter platforms connected with the first connector, and the target parameter platform is called through the first connector to obtain the parameter node processing result; when the execution node for calling the image is triggered, a target image platform is selected from the image platforms connected with the second connector, and the target image platform is called through the second connector to obtain the image node processing result.
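The parameter/image example above can be sketched as a scheduler orchestrated from two connectors. This is an illustrative sketch only; `Connector`, `run_scheduler`, and the platform names are invented, and platform calls are stubbed:

```python
class Connector:
    """One connector per execution node; connects to 1+ resource platforms."""
    def __init__(self, node_name, platforms):
        self.node_name = node_name
        self.platforms = platforms  # platforms able to process this node

    def invoke(self, pick):
        target = pick(self.platforms)     # select the target platform
        return (self.node_name, target)   # stub of the node processing result

def run_scheduler(connectors, pick=lambda ps: ps[0]):
    # Execution nodes are triggered in sequence, one connector per node.
    return [c.invoke(pick) for c in connectors]

first = Connector("call_parameter", ["param_platform_1", "param_platform_2"])
second = Connector("call_image", ["image_platform_1"])
print(run_scheduler([first, second]))
```

The `pick` callable stands in for the target-platform selection step; in the patent's terms it would apply the load-distribution policy or the algorithm platform's instruction.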
Therefore, based on the data connectors accessed by the bus, the bus provides a corresponding scheduler and can realize the acquisition of full-mode data required by a full-scene algorithm. This helps algorithm applications realize business concatenation across multi-source algorithm engines and multiple heterogeneous data middle-platform services.
It can be understood that, based on the computing power connectors accessed by the bus, the bus provides a corresponding scheduler and can realize computing power service calls of the different computing power service types required by mainstream algorithms, such as general computing power services, heterogeneous AI computing power services, edge computing power services, and end computing power services, so that the algorithm services of multiple algorithm platforms can be uniformly deployed, uniformly monitored, and uniformly scheduled across multiple computing power platforms.
According to the above exemplary embodiment, a scheduler is configured by using connectors corresponding to the execution nodes, and each connector is connected to a resource platform corresponding to the execution node. Therefore, the bus can build a bridge between the algorithm platform side and multiple resource platforms providing different services, access of multiple resources is completed for the algorithm platform side, resource reuse is facilitated, the resource reuse rate of a specific application scene is improved, and the resource utilization rate is also improved.
Alternatively, each resource platform may refer to the various connectors preconfigured on the bus and complete its access as a lower-level service platform of the bus by registering on the bus. An adapter is configured in the resource platform and docks with the connector, so that the adapter completes the service capability conversion between the resource service provided by the platform and the execution node corresponding to the connector. Optionally, the service capability conversion includes parameter mapping, merging, fusing, and the like. For example, the adapter is used for converting a call request issued by the connector into a personalized service call request conforming to the interface definition of the resource platform, and for converting a return result output by the resource platform that conforms to the resource platform's interface definition into a node processing result sent to the connector. Illustratively, one connector may interface with one or more adapters, and the adapters can be deployed in different resource platforms so that multiple platforms can access the same execution node or the same service. In this way, the service adaptation process of each supplying platform is deployed within that platform, keeping the organization lightweight and improving the response efficiency of the intelligent service.
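The adapter described above can be sketched as a thin translation layer between a connector's generic call request and a vendor-specific platform interface. Everything here (the `pull` method, the field names `image`, `version`, `code`) is invented for illustration — the patent specifies only the conversion responsibility, not any concrete interface:

```python
class VendorPlatform:
    """Vendor-specific interface: expects 'img' and 'tag', returns 'code'."""
    def pull(self, img, tag):
        return {"code": 0, "detail": f"pulled {img}:{tag}"}

class Adapter:
    """Deployed inside the resource platform; docks with a bus connector."""
    def __init__(self, platform):
        self.platform = platform

    def handle(self, generic_request):
        # Parameter mapping: generic connector fields -> vendor fields
        # (one form of the "service capability conversion" above).
        resp = self.platform.pull(img=generic_request["image"],
                                  tag=generic_request.get("version", "latest"))
        # Convert the vendor return value into a node processing result
        # in the shape the connector expects.
        return {"ok": resp["code"] == 0, "message": resp["detail"]}

adapter = Adapter(VendorPlatform())
print(adapter.handle({"image": "detector", "version": "1.2"}))
```

Because the conversion lives in the adapter, the connector can stay identical across vendors while each platform ships its own adapter.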
In an exemplary embodiment, in the step S110, determining the target resource platform in the at least one resource platform may include: determining a currently called connector among the at least one connector based on the service interface receiving the first call request; and determining a target resource platform in at least one resource platform connected by the currently called connector.
Illustratively, the scheduler may provide a plurality of service interfaces, with different service interfaces corresponding to different connectors. In particular, one service interface may correspond to one or more connectors. The algorithm platform can initiate call requests through different service interfaces for different service processes in the resource scheduling flow.
As an example, fig. 2 shows a schematic diagram of the service interfaces and connectors of a scheduler in one application example of the present disclosure. The scheduler is used for realizing a computing power scheduling flow and may provide a plurality of service interfaces related to computing power services, such as an image upload interface 211, an image upload status interface 212, a workload creation interface 213, a workload deletion interface 214, a workload status query interface 215, and a monitoring query interface 216. The image upload interface 211 may correspond to the image verification connector 221 and the image pull connector 222; the image upload status interface 212 may correspond to the image upload status connector 223; the workload creation interface 213 may correspond to a workload verification connector 224, an optimal resource connector 225, and a create workload connector 226; the workload deletion interface 214 may correspond to a pre-deletion verification connector 227 and a deletion workload connector 228; the workload status query interface 215 may correspond to the workload status query connector 229; and the monitoring query interface 216 may correspond to the monitoring query connector 230.
Optionally, in a case that the service interface corresponds to a plurality of connectors, the plurality of connectors may be sequentially called, so as to trigger each connector to complete the corresponding execution node. After determining the currently invoked connector, a target resource platform may be determined among at least one resource platform connected to the connector according to a preconfigured load distribution policy.
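The FIG. 2 interface-to-connector mapping, with sequential invocation when one interface maps to several connectors, can be sketched as follows. The connector behavior is stubbed and the assumption that later connectors run only after earlier ones return is this sketch's, not a claim of the patent:

```python
# Interface -> ordered connectors, following the FIG. 2 example.
INTERFACE_CONNECTORS = {
    "image_upload": ["image_verification", "image_pull"],
    "workload_create": ["workload_verification", "optimal_resource",
                        "create_workload"],
    "monitoring_query": ["monitoring_query"],
}

def call_interface(interface, invoke=lambda name: f"{name}:ok"):
    # When a service interface corresponds to several connectors,
    # trigger each connector (and its execution node) in sequence.
    return [invoke(name) for name in INTERFACE_CONNECTORS[interface]]

print(call_interface("workload_create"))
```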
The above exemplary embodiment divides the resource scheduling flow into different interfaces, and the different interfaces correspond to different connectors, so that the algorithm platform can call the service of the corresponding interface according to the requirement, and the fine granularity of service call is improved, thereby being beneficial to realizing the multiplexing of resource service and improving the resource utilization rate.
Illustratively, among the at least one resource platform to which the currently invoked connector is connected, determining the target resource platform may include: and determining a target resource platform in at least one resource platform connected by the currently called connector based on the resource requirement information in the first calling request.
That is, the first invocation request may carry the resource requirement information. The resource requirement information may include information such as the number, size, and type of the resource. The bus can allocate a proper resource platform as a target resource platform according to the resource demand information. In some examples, the target resource platform may include multiple platforms. For example, for the computing power requirement, a plurality of resource platforms may be allocated to provide computing power respectively, and the processing result of the execution node is obtained by summarizing the processing result of each resource platform.
Fig. 3 shows an implementation flow of a scheduler in another application example of the present disclosure. As shown in fig. 3, when a call request to the scheduler is received through a service interface, the execution of the scheduling flow corresponding to the scheduler is triggered; specifically, the execution of node 1 and node 2 corresponding to the service interface is triggered. For node 1, a target resource platform may be determined according to the resource demand information in the call request; the target resource platform may include resource platform 1 and resource platform 2, where, for example, resource platform 1 completes 40% of the node task and resource platform 2 completes 60%. Resource platforms 1 and 2 are then called respectively through connector 1 corresponding to node 1, their processing results are summarized to obtain the processing result of node 1, and that result is returned to node 1. Node 2 is then executed based on the processing result of node 1. Similarly to node 1, for node 2 it may be determined that resource platforms 1 and 2 are the target resource platforms; they are called respectively through connector 2 corresponding to node 2, and their processing results are summarized to obtain the processing result of node 2, which is returned to node 2. The resource call is then completed, the processing result of node 2 is taken as the call result, and the call result is returned.
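The FIG. 3 split-and-summarize step can be sketched as follows: a node's task is split across two target platforms (40% / 60%, per the example above) and the partial results are summarized into the node processing result. The data shapes here are invented for illustration:

```python
def process_node(node, shares):
    """Split one execution node's task across target platforms by share,
    then summarize the per-platform results into the node result."""
    partials = []
    for platform, share in shares.items():
        # Each target platform completes its share of the node task (stubbed).
        partials.append({"platform": platform, "share": share,
                         "result": f"{node}@{platform}"})
    # Sanity check: the shares should cover the whole node task.
    assert abs(sum(p["share"] for p in partials) - 1.0) < 1e-9
    return {node: [p["result"] for p in partials]}

result_1 = process_node("node1", {"platform1": 0.4, "platform2": 0.6})
# Node 2 executes based on node 1's result; its output is the call result.
result_2 = process_node("node2", {"platform1": 0.4, "platform2": 0.6})
print(result_1, result_2)
```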
According to the above example, the resource platform can be reasonably scheduled according to the resource requirement of the algorithm platform, so that the multiplexing effect of the resource service is improved, and the resource utilization rate is improved.
In an exemplary embodiment, the method for calling the algorithm resource may further include: determining a corresponding connector based on each executing node in the resource scheduling flow; and configuring a target scheduler based on the connector corresponding to each execution node.
This embodiment provides a way of configuring the scheduler to be executed before receiving the first call request from the algorithm platform.
According to this embodiment, a plurality of connectors may be configured in advance, then a connector may be selected according to the requirement of the resource scheduling flow, and the target scheduler may be configured based on the selected connector. Therefore, various resource platforms providing different services can be accessed in advance, the configuration of the connectors is completed firstly, the multiplexing of the connectors to different resource scheduling flows is facilitated, and the resource utilization rate is improved.
Illustratively, determining a corresponding connector based on each executing node in the resource scheduling flow comprises: determining at least one standardized service based on the service type of each resource platform accessing the bus; configuring at least one connector corresponding to at least one standardized service, respectively, based on the at least one standardized service; and selecting a connector corresponding to each execution node from the at least one connector based on the standardized service corresponding to each execution node in the resource scheduling flow.
This embodiment provides for the configuration and selection of connectors. Specifically, the service types may include types of services related to resource invocation, such as load-related, instance-related, and monitoring-related services under a computing power platform, or video-stream-related and structured-file-related services under a data platform. By aggregating the accessed service types, at least one mutually distinct standardized service can be abstracted, and a corresponding connector is configured for each standardized service. When a scheduler needs to be configured, the corresponding connector is selected according to the standardized service corresponding to each node in the resource scheduling flow.
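The standardized-service derivation above can be sketched in a few lines: aggregate the service types of the registered platforms into distinct standardized services, configure one connector per service, then select connectors per flow node. The platform and service names are invented for illustration:

```python
# Service types exposed by each platform registered on the bus.
platform_services = {
    "compute_platform_a": ["load", "instance", "monitoring"],
    "compute_platform_b": ["load", "monitoring"],
    "data_platform_c": ["video_stream", "structured_file"],
}

# Aggregate the accessed service types into distinct standardized services.
standardized = sorted({s for svcs in platform_services.values() for s in svcs})

# Configure one connector per standardized service.
connectors = {service: f"{service}_connector" for service in standardized}

def connectors_for_flow(flow_nodes):
    # Select the connector matching each node's standardized service.
    return [connectors[node] for node in flow_nodes]

print(connectors_for_flow(["load", "monitoring"]))
```

Because the connectors are keyed by standardized service rather than by platform, a later scheduler covering new flow nodes can reuse the same connector pool.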
According to the embodiment, different resources provided by each resource platform can be fully utilized, the coverage of standardized services is improved, and therefore the resource utilization rate is improved.
In an exemplary embodiment, the method for calling the algorithm resource may further include: receiving subscription information for a target scheduler from an algorithm platform; determining a subscribed service interface among the at least one service interface of the target scheduler based on the subscription information; sending confirmation information aiming at the subscription information to an algorithm platform; and the confirmation information is used for indicating the algorithm platform to adopt the subscribed service interface as a resource calling interface.
Illustratively, the subscription information is used to request permission to invoke some or all of the service interfaces of the target scheduler. The subscription information carries the relevant information of the service interface, so that the bus can determine the subscribed service interface according to the subscription information and open corresponding authority for the algorithm platform. And under the condition that the algorithm platform receives the confirmation information, the algorithm platform can be confirmed to have the authority of calling the service interface, so that the service interface can be adopted as a resource calling interface.
Alternatively, the algorithm platform may perform interface modifications based on the validation information, such as changing the docking resource platform to a docking bus. In this manner, the resource invocation may be accomplished by initiating a call request to the scheduler to the bus.
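The subscription handshake described above — subscription information, permission check on the bus, confirmation, then interface modification on the algorithm platform — can be sketched as follows. The interface names and the rule of granting only interfaces the scheduler actually exposes are illustrative assumptions:

```python
# Service interfaces exposed by the target scheduler (illustrative).
SCHEDULER_INTERFACES = {"image_upload", "workload_create", "monitoring_query"}

def subscribe(requested):
    """Bus side: determine the subscribed interfaces from the subscription
    information and return confirmation information."""
    granted = set(requested) & SCHEDULER_INTERFACES
    return {"confirmed": bool(granted), "interfaces": sorted(granted)}

ack = subscribe(["workload_create", "nonexistent_api"])
if ack["confirmed"]:
    # Algorithm-platform side: on confirmation, adopt the subscribed
    # interfaces as resource-calling interfaces (docking the bus
    # instead of the resource platform).
    resource_call_interfaces = ack["interfaces"]
print(ack)
```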
To facilitate understanding of the above embodiments, fig. 4 shows a flowchart of a method for calling an algorithm resource in another application example of the present disclosure. This application example illustrates the configuration and application process of the scheduler, taking a computing power platform as the resource platform. As shown in fig. 4, the method comprises the steps of:
s401, registering the computing power platform on the bus.
S402, a computing force platform connector is configured on the bus.
And S403, registering the algorithm platform on the bus.
And S404, registering the algorithm platform connector on the bus.
S405, a scheduler is configured on the bus.
S406, the algorithm platform subscribes to the scheduler.
And S407, registering scheduler subscription information on the bus to provide scheduler authority for the algorithm platform.
S408, the algorithm platform acquires the authority of the scheduler.
And S409, carrying out interface transformation on the algorithm platform, and changing the butt joint calculation force into a butt joint bus.
And S410, an algorithm platform provides an algorithm deployment request.
S411, executing the relevant connector based on the dispatcher on the bus to call the interface of the computing platform.
And S412, executing algorithm deployment by the computing power platform.
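The registration and configuration sequence S401 to S412 can be sketched as follows. All names are hypothetical, and the sketch assumes a simple in-memory registry standing in for the bus; it only shows the ordering constraints (platforms before connectors, connectors before the scheduler).

```python
class Bus:
    def __init__(self):
        self.platforms = {}    # platform name -> platform kind
        self.connectors = {}   # connector name -> platform it fronts
        self.schedulers = {}   # scheduler name -> list of connectors

    def register_platform(self, name, kind):
        self.platforms[name] = kind

    def configure_connector(self, name, platform):
        # a connector can only front a platform that is already registered
        assert platform in self.platforms
        self.connectors[name] = platform

    def configure_scheduler(self, name, connectors):
        # a scheduler can only be assembled from configured connectors
        assert all(c in self.connectors for c in connectors)
        self.schedulers[name] = connectors


bus = Bus()
bus.register_platform("compute-1", "computing_power")   # S401
bus.configure_connector("compute-conn", "compute-1")    # S402
bus.register_platform("algo-1", "algorithm")            # S403
bus.configure_connector("algo-conn", "algo-1")          # S404
bus.configure_scheduler("deploy-scheduler", ["compute-conn", "algo-conn"])  # S405
print(bus.schedulers)
```

The subscription steps S406 to S409 would then follow the handshake shown earlier in this description.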
FIG. 5 shows a flow diagram of a cross-platform algorithm image synchronization service implemented based on the scheduler in this application example. As shown in fig. 5, the process of implementing the image synchronization service may include:
S501, the algorithm platform declares its service requirement to the scheduler on the bus.
S502, the bus returns a normal response to the algorithm platform; specifically, a returned status code of 200 indicates a normal response.
S503, the bus calls the algorithm platform connector to obtain the algorithm image address list of the algorithm platform.
S504, the algorithm platform responds to the bus and returns the image address list.
S505, the bus calls the computing power platform image connector to instruct the computing power platform to pull the images.
S506, the bus calls the computing power platform image connector to query the job state.
S507, the computing power platform responds with the job state.
S508, the computing power platform responds with the image pull result.
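The image synchronization flow S501 to S508 can be sketched as a bus-side orchestration over two connectors. The connector classes, image addresses, and state values below are hypothetical stubs, not part of the disclosure.

```python
class AlgoConnector:
    """Stub for the algorithm platform connector (S503/S504)."""
    def list_image_addresses(self):
        return ["registry/algo-a:v1", "registry/algo-b:v2"]


class ComputeImageConnector:
    """Stub for the computing power platform image connector (S505-S508)."""
    def __init__(self):
        self.jobs = {}

    def pull_image(self, addr):
        job_id = len(self.jobs)
        # each pull job reports "running" once, then "succeeded"
        self.jobs[job_id] = iter(["running", "succeeded"])
        return job_id

    def query_state(self, job_id):
        return next(self.jobs[job_id])


def sync_images(algo_conn, compute_conn):
    results = {}
    for addr in algo_conn.list_image_addresses():   # S503/S504
        job = compute_conn.pull_image(addr)         # S505
        state = compute_conn.query_state(job)       # S506/S507
        while state == "running":                   # poll until the pull settles
            state = compute_conn.query_state(job)
        results[addr] = state                       # S508: pull result per image
    return results


print(sync_images(AlgoConnector(), ComputeImageConnector()))
```

A production version would poll with a delay and a timeout rather than in a tight loop; the sketch keeps only the connector-call ordering.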
FIG. 6 shows a flow diagram of a cross-platform algorithm deployment service implemented based on the scheduler in this application example. As shown in fig. 6, the process of implementing algorithm deployment may include the following steps:
S601, the algorithm platform declares a cross-platform deployment service requirement to the scheduler on the bus.
S602, the bus calls the computing power platform asset connector to query the resource quota of the computing power platform.
S603, the computing power platform responds to the resource quota query.
S604, the computing power platform image is obtained through the bus.
S605, the bus calls the computing power platform load connector to create a workload.
S606, the bus calls the computing power platform query connector to query the deployment state.
S607, the computing power platform responds to the deployment state query.
S608, the computing power platform responds with the workload creation result.
S609, the bus returns the deployment result to the algorithm platform.
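The deployment flow S601 to S609 can likewise be sketched as a quota check, workload creation, and state polling. The connector stubs, resource keys, and state values below are hypothetical, not from the disclosure.

```python
class AssetConnector:
    """Stub for the asset connector (S602/S603)."""
    def query_quota(self):
        return {"cpu": 8, "memory_gb": 32}


class LoadConnector:
    """Stub for the load connector (S605/S608)."""
    def create_workload(self, image, requirement):
        return "wl-0"


class QueryConnector:
    """Stub for the query connector (S606/S607)."""
    def __init__(self):
        self.states = iter(["deploying", "available"])

    def query_state(self, workload):
        return next(self.states)


def deploy_cross_platform(asset_conn, load_conn, query_conn, requirement, image):
    quota = asset_conn.query_quota()                       # S602/S603
    if any(quota.get(res, 0) < amount for res, amount in requirement.items()):
        return {"deployed": False, "reason": "insufficient quota"}
    workload = load_conn.create_workload(image, requirement)  # S605
    state = query_conn.query_state(workload)               # S606/S607
    while state == "deploying":                            # poll until settled
        state = query_conn.query_state(workload)
    return {"deployed": state == "available", "workload": workload}  # S609


print(deploy_cross_platform(AssetConnector(), LoadConnector(), QueryConnector(),
                            {"cpu": 4}, "registry/algo-a:v1"))
```

The early return on insufficient quota corresponds to the bus refusing deployment before any workload is created, which is one reasonable reading of the quota query in S602.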
It can be seen that in the above application example, cross-platform computing power scheduling capability can be realized through the configuration of connectors, the orchestration of resource scheduling flows, and the access of at least one resource platform. Therefore, the utilization rate of algorithm resources can be improved through cross-platform resource access and centralized scheduling.
According to the embodiment of the disclosure, the disclosure also provides a calling device of the algorithm resource. Fig. 7 shows a schematic block diagram of a calling device of an algorithm resource in an embodiment of the present disclosure. As shown in fig. 7, the calling device may include:
a request processing module 710, configured to determine, in response to receiving a first call request for a target scheduler from an algorithm platform, a target resource platform among at least one resource platform to which the target scheduler is connected; each resource platform in the at least one resource platform is used for processing an execution node in a resource scheduling flow corresponding to the target scheduler;
a first sending module 720, configured to send a second call request to the target resource platform; the second call request is used for indicating the target resource platform to process the execution node to obtain a corresponding node processing result;
a first receiving module 730, configured to receive a node processing result from the target resource platform, and obtain a call result of the resource scheduling flow based on the node processing result;
and a second sending module 740, configured to send the call result to the algorithm platform.
Optionally, the target scheduler may include at least one connector corresponding to at least one execution node in the resource scheduling flow, respectively; each connector in the at least one connector is connected with at least one resource platform, and the at least one resource platform is used for processing the execution node corresponding to the connector.
Illustratively, the request processing module 710 is specifically configured to:
determining a currently called connector among the at least one connector based on the service interface receiving the first call request;
and determining a target resource platform in at least one resource platform connected by the currently called connector.
Illustratively, the request processing module 710 is specifically configured to:
and determining a target resource platform in at least one resource platform connected by the currently called connector based on the resource requirement in the first calling request.
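A minimal sketch of this selection step, assuming a hypothetical free-resource record per platform (the field names and matching policy are illustrative, not specified by the disclosure):

```python
def select_target_platform(connected_platforms, requirement):
    """Pick the first platform connected to the currently invoked connector
    whose free resources satisfy the resource requirement in the call request."""
    for platform in connected_platforms:
        free = platform["free"]
        if all(free.get(res, 0) >= amount for res, amount in requirement.items()):
            return platform["name"]
    return None  # no connected platform can satisfy the requirement


platforms = [
    {"name": "compute-1", "free": {"gpu": 0, "cpu": 16}},
    {"name": "compute-2", "free": {"gpu": 4, "cpu": 8}},
]
print(select_target_platform(platforms, {"gpu": 2}))  # compute-2
```

First-fit is only one possible policy; best-fit or load-balanced selection would slot into the same interface.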
Fig. 8 shows a schematic block diagram of a calling device of algorithm resources in another embodiment of the present disclosure. As shown in fig. 8, the invoking device may further include the features in the foregoing embodiments, and may further include a configuration module 810, where the configuration module 810 is configured to:
determining a corresponding connector based on each executing node in the resource scheduling flow;
and configuring a target scheduler based on the connector corresponding to each execution node.
Illustratively, the configuration module 810 is specifically configured to:
determining at least one standardized service based on the service types of the resource platforms accessing the bus;
configuring at least one connector corresponding to at least one standardized service, respectively, based on the at least one standardized service;
and selecting a connector corresponding to each execution node from the at least one connector based on the standardized service corresponding to each execution node in the resource scheduling flow.
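The three steps above can be sketched as follows. The service-type names and the connector naming scheme are hypothetical; the sketch only shows deriving standardized services from platform service types, configuring one connector per service, and matching each execution node to its connector.

```python
def configure_scheduler_connectors(platform_service_types, flow_nodes):
    # Step 1: derive the standardized services from the platforms' service types
    standardized = sorted(set(platform_service_types.values()))
    # Step 2: configure one connector per standardized service
    connectors = {svc: f"{svc}-connector" for svc in standardized}
    # Step 3: select, for each execution node in the resource scheduling flow,
    # the connector whose standardized service matches the node
    return [connectors[node["service"]] for node in flow_nodes]


platform_service_types = {
    "compute-1": "image_pull",
    "compute-2": "workload_create",
    "compute-3": "image_pull",  # same service type maps to the same connector
}
flow_nodes = [{"service": "image_pull"}, {"service": "workload_create"}]
print(configure_scheduler_connectors(platform_service_types, flow_nodes))
```

Note that two platforms offering the same service type share one standardized connector, which is what allows the connector to front "at least one resource platform" as described above.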
Optionally, as shown in fig. 8, the invoking device may further include:
a second receiving module 820 for receiving subscription information for the target scheduler from the algorithm platform;
a subscription processing module 830, configured to determine a subscribed service interface among the at least one service interface of the target scheduler based on the subscription information;
a third sending module 840, configured to send confirmation information for the subscription information to the algorithm platform; and the confirmation information is used for indicating the algorithm platform to adopt the subscribed service interface as a resource calling interface.
For a description of specific functions and examples of each module and sub-module of the apparatus in the embodiment of the present disclosure, reference may be made to the description of corresponding steps in the foregoing method embodiments, and details are not repeated here.
In the technical scheme of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, and do not violate public order or good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 9 illustrates a schematic block diagram of an example electronic device 900 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the device 900 includes a computing unit 901, which can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, ROM 902, and RAM 903 are connected to each other via a bus 904. An input/output (I/O) interface 905 is also connected to bus 904.
A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 901 performs the respective methods and processes described above, such as the method for calling algorithm resources. For example, in some embodiments, the method for calling algorithm resources may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 900 via ROM 902 and/or communication unit 909. When the computer program is loaded into RAM 903 and executed by computing unit 901, one or more steps of the method for calling algorithm resources described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to execute the method for calling algorithm resources by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SoCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (16)

1. A calling method of algorithm resources comprises the following steps:
in response to receiving a first call request for a target scheduler from an algorithm platform, determining a target resource platform among at least one resource platform to which the target scheduler is connected; wherein each resource platform of the at least one resource platform is configured to process an execution node in a resource scheduling flow corresponding to the target scheduler;
sending a second calling request to the target resource platform; the second call request is used for indicating the target resource platform to process the execution node to obtain a corresponding node processing result;
receiving the node processing result from the target resource platform, and obtaining a calling result of the resource scheduling flow based on the node processing result;
and sending the calling result to the algorithm platform.
2. The method of claim 1, wherein the target scheduler includes at least one connector corresponding to at least one execution node in the resource scheduling flow, respectively; each connector in the at least one connector is connected with at least one resource platform, and the at least one resource platform is used for processing an execution node corresponding to the connector.
3. The method of claim 2, wherein said determining a target resource platform among said at least one resource platform comprises:
determining a currently called connector among the at least one connector based on the service interface receiving the first call request;
and determining the target resource platform in at least one resource platform connected with the currently called connector.
4. The method of claim 3, wherein said determining the target resource platform among the at least one resource platform of the currently invoked connector connection comprises:
and determining the target resource platform in at least one resource platform connected by the currently called connector based on the resource requirement in the first calling request.
5. The method of any of claims 1-4, further comprising:
determining a corresponding connector based on each executing node in the resource scheduling flow;
and configuring the target scheduler based on the connector corresponding to each execution node.
6. The method of claim 5, wherein the determining a corresponding connector based on each executing node in the resource scheduling flow comprises:
determining at least one standardized service based on the service type of each resource platform accessing the bus;
configuring at least one connector corresponding to the at least one standardized service, respectively, based on the at least one standardized service;
and selecting a connector corresponding to each execution node from the at least one connector based on the standardized service corresponding to each execution node in the resource scheduling flow.
7. The method of any of claims 1-4, further comprising:
receiving subscription information for the target scheduler from the algorithm platform;
determining a subscribed service interface among the at least one service interface of the target scheduler based on the subscription information;
sending confirmation information for the subscription information to the algorithm platform; and the confirmation information is used for indicating the algorithm platform to adopt the subscribed service interface as a resource calling interface.
8. An apparatus for invoking algorithm resources, comprising:
the request processing module is used for responding to a first calling request which is received from an algorithm platform and aims at a target dispatcher, and determining a target resource platform in at least one resource platform connected with the target dispatcher; wherein each resource platform of the at least one resource platform is configured to process an execution node in a resource scheduling flow corresponding to the target scheduler;
the first sending module is used for sending a second calling request to the target resource platform; the second call request is used for indicating the target resource platform to process the execution node to obtain a corresponding node processing result;
a first receiving module, configured to receive the node processing result from the target resource platform, and obtain a call result of the resource scheduling flow based on the node processing result;
and the second sending module is used for sending the calling result to the algorithm platform.
9. The apparatus of claim 8, wherein the target scheduler comprises at least one connector corresponding to at least one execution node in the resource scheduling flow, respectively; each connector in the at least one connector is connected with at least one resource platform, and the at least one resource platform is used for processing the execution node corresponding to the connector.
10. The apparatus of claim 9, wherein the request processing module is to:
determining a currently called connector among the at least one connector based on the service interface receiving the first call request;
determining the target resource platform in at least one resource platform connected by the currently called connector.
11. The apparatus of claim 10, wherein the request processing module is to:
and determining the target resource platform in at least one resource platform connected by the currently called connector based on the resource requirement in the first calling request.
12. The apparatus of any of claims 8-11, further comprising a configuration module to:
determining a corresponding connector based on each executing node in the resource scheduling flow;
and configuring the target scheduler based on the connector corresponding to each execution node.
13. The apparatus of claim 12, wherein the configuration module is to:
determining at least one standardized service based on the service types of the resource platforms accessing the bus;
configuring at least one connector corresponding to the at least one standardized service, respectively, based on the at least one standardized service;
and selecting a connector corresponding to each execution node from the at least one connector based on the standardized service corresponding to each execution node in the resource scheduling flow.
14. The apparatus of any of claims 8-11, further comprising:
a second receiving module for receiving subscription information for the target scheduler from the algorithm platform;
a subscription processing module, configured to determine a subscribed service interface among the at least one service interface of the target scheduler based on the subscription information;
a third sending module, configured to send confirmation information for the subscription information to the algorithm platform; and the confirmation information is used for indicating the algorithm platform to adopt the subscribed service interface as a resource calling interface.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202310263804.XA 2023-03-17 2023-03-17 Method and device for calling algorithm resources, electronic equipment and storage medium Active CN115981872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310263804.XA CN115981872B (en) 2023-03-17 2023-03-17 Method and device for calling algorithm resources, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310263804.XA CN115981872B (en) 2023-03-17 2023-03-17 Method and device for calling algorithm resources, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115981872A true CN115981872A (en) 2023-04-18
CN115981872B CN115981872B (en) 2023-12-01

Family

ID=85970850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310263804.XA Active CN115981872B (en) 2023-03-17 2023-03-17 Method and device for calling algorithm resources, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115981872B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988418A (en) * 2020-08-28 2020-11-24 平安国际智慧城市科技股份有限公司 Data processing method, device, equipment and computer readable storage medium
CN112148494A (en) * 2020-09-30 2020-12-29 北京百度网讯科技有限公司 Processing method and device for operator service, intelligent workstation and electronic equipment
CN112199385A (en) * 2020-09-30 2021-01-08 北京百度网讯科技有限公司 Processing method and device for artificial intelligence AI, electronic equipment and storage medium
CN112486648A (en) * 2020-11-30 2021-03-12 北京百度网讯科技有限公司 Task scheduling method, device, system, electronic equipment and storage medium
WO2021135448A1 (en) * 2019-12-31 2021-07-08 ***股份有限公司 Service invocation method, apparatus, device, and medium
CN114756340A (en) * 2022-03-17 2022-07-15 中国联合网络通信集团有限公司 Computing power scheduling system, method, device and storage medium
CN115328663A (en) * 2022-10-10 2022-11-11 亚信科技(中国)有限公司 Method, device, equipment and storage medium for scheduling resources based on PaaS platform

Also Published As

Publication number Publication date
CN115981872B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN108776934B (en) Distributed data calculation method and device, computer equipment and readable storage medium
CN109408205B (en) Task scheduling method and device based on hadoop cluster
US10652360B2 (en) Access scheduling method and apparatus for terminal, and computer storage medium
CN113849312B (en) Data processing task allocation method and device, electronic equipment and storage medium
CN113641457A (en) Container creation method, device, apparatus, medium, and program product
CN111078404B (en) Computing resource determining method and device, electronic equipment and medium
CN114911598A (en) Task scheduling method, device, equipment and storage medium
CN114840323A (en) Task processing method, device, system, electronic equipment and storage medium
CN111124640A (en) Task allocation method and system, storage medium and electronic device
CN111190719B (en) Method, device, medium and electronic equipment for optimizing cluster resource allocation
CN114296953A (en) Multi-cloud heterogeneous system and task processing method
CN112398669A (en) Hadoop deployment method and device
CN112104679A (en) Method, apparatus, device and medium for processing hypertext transfer protocol request
CN113419865A (en) Cloud resource processing method, related device and computer program product
CN114780228B (en) Hybrid cloud resource creation method and system
CN114327918B (en) Method and device for adjusting resource amount, electronic equipment and storage medium
CN114564249B (en) Recommendation scheduling engine, recommendation scheduling method and computer readable storage medium
CN115981872B (en) Method and device for calling algorithm resources, electronic equipment and storage medium
CN115328612A (en) Resource allocation method, device, equipment and storage medium
CN115567602A (en) CDN node back-to-source method, device and computer readable storage medium
CN114490000A (en) Task processing method, device, equipment and storage medium
CN114070889A (en) Configuration method, traffic forwarding method, device, storage medium, and program product
CN114237902A (en) Service deployment method and device, electronic equipment and computer readable medium
CN114036250A (en) High-precision map task processing method and device, electronic equipment and medium
CN114265692A (en) Service scheduling method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant