CN116415680A - Component adaptation method and device for machine learning task - Google Patents

Component adaptation method and device for machine learning task

Info

Publication number
CN116415680A
Authority
CN
China
Prior art keywords
protocol
machine learning
component
learning task
computing engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111667684.7A
Other languages
Chinese (zh)
Inventor
马浩
郭春清
王妮
张钰
付豪
石光川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
4Paradigm Beijing Technology Co Ltd
Original Assignee
4Paradigm Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 4Paradigm Beijing Technology Co Ltd filed Critical 4Paradigm Beijing Technology Co Ltd
Priority to CN202111667684.7A priority Critical patent/CN116415680A/en
Publication of CN116415680A publication Critical patent/CN116415680A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Computer And Data Communications (AREA)

Abstract

Disclosed are a component adaptation method and apparatus for a machine learning task, the component comprising a native operator, the component adaptation method comprising: identifying a first protocol used by the native operator, and judging whether an underlying computing engine of a machine learning task supports the first protocol; converting the first protocol into a second protocol supported by the underlying computing engine when the first protocol is not supported by the underlying computing engine; and running the native operator on the underlying computing engine based on the second protocol to perform the machine learning task. The component adaptation method can reduce the complexity and the learning cost of each component, thereby improving the deployment efficiency and the execution efficiency of the machine learning task.

Description

Component adaptation method and device for machine learning task
Technical Field
The present disclosure relates generally to the field of artificial intelligence, and more particularly, to a component adaptation method and apparatus for machine learning tasks.
Background
Against the backdrop of big data, new infrastructure, AIoT (AI + IoT, the Artificial Intelligence of Things) and 5G (fifth-generation mobile communication technology), AI (Artificial Intelligence) has been driving industries to develop more efficiently and at higher speed. At the same time, a large number of AI-related components, such as Airflow, Kubeflow, MLflow and DataHub, have emerged under the initiative of the CNCF (Cloud Native Computing Foundation) and the ASF (Apache Software Foundation). In order to make better use of the components of each community, integrate into the community ecology, and at the same time keep the user interaction experience of upper-layer products unchanged, an adaptation capability is needed to integrate the components of each community and absorb their complexity and high learning cost, thereby providing a consistent and unified API (Application Programming Interface) to upper-layer products, so that product requirements can be met and the components can be used without being locked into any specific component.
Disclosure of Invention
The present disclosure provides a component adaptation method and apparatus for a machine learning task to at least solve the above-mentioned problems.
According to an aspect of the present disclosure, there is provided a component adaptation method of a machine learning task, the component including a native operator, the component adaptation method comprising: identifying a first protocol used by the native operator, and judging whether an underlying computing engine of a machine learning task supports the first protocol; converting the first protocol into a second protocol supported by the underlying computing engine when the first protocol is not supported by the underlying computing engine; and running the native operator on the underlying computing engine based on the second protocol to perform the machine learning task.
Optionally, converting the first protocol to a second protocol supported by the underlying computing engine includes: converting the first protocol into a second protocol supported by the underlying computing engine based on a pre-established unified protocol standard.
Optionally, the component further comprises metadata, and the component adaptation method further comprises: identifying the metadata and judging whether the metadata supports the native operator or not; and when the metadata does not support the native operator, parsing the metadata to acquire the data information of the original data corresponding to the metadata.
Optionally, based on the second protocol, running the native operator on the underlying computing engine to perform the machine learning task includes: based on the second protocol and the data information, the native operator is caused to run on the underlying computing engine in conjunction with the metadata to perform the machine learning task.
Optionally, the data information includes an access address and an access token, the access address and the access token being used to access the original data.
Optionally, based on the second protocol, running the native operator on the underlying computing engine to perform the machine learning task includes: performing authentication and authorization, based on an identity and access management capability, when the native operator interfaces with the underlying computing engine.
According to another aspect of the present disclosure, there is provided a component adaptation apparatus of a machine learning task, the component including a native operator, the component adaptation apparatus comprising: a protocol identification unit configured to: identifying a first protocol used by the native operator, and judging whether an underlying computing engine of a machine learning task supports the first protocol; a protocol conversion unit configured to: converting the first protocol into a second protocol supported by the underlying computing engine when the first protocol is not supported by the underlying computing engine; and an operator execution unit configured to: running the native operator on the underlying computing engine based on the second protocol to perform the machine learning task.
Optionally, the protocol conversion unit is configured to: converting the first protocol into a second protocol supported by the underlying computing engine based on a pre-established unified protocol standard.
Optionally, the component further comprises metadata, and the component adaptation apparatus further comprises: a metadata processing unit configured to: identifying the metadata and judging whether the metadata supports the native operator or not; and when the metadata does not support the native operator, parsing the metadata to acquire the data information of the original data corresponding to the metadata.
Optionally, the operator execution unit is configured to: based on the second protocol and the data information, the native operator is caused to run on the underlying computing engine in conjunction with the metadata to perform the machine learning task.
Optionally, the data information includes an access address and an access token, the access address and the access token being used to access the original data.
Optionally, the operator execution unit is further configured to: performing authentication and authorization, based on an identity and access management capability, when the native operator interfaces with the underlying computing engine.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform a component adaptation method of a machine learning task as described above.
According to another aspect of the present disclosure, there is provided a system comprising at least one computing device and at least one storage device storing instructions, wherein the instructions, when executed by the at least one computing device, cause the at least one computing device to perform a component adaptation method of a machine learning task as described above.
According to the component adaptation method and apparatus for a machine learning task of the present disclosure, various self-built, community, or enterprise-side customer components can be integrated; product requirements can be met and the components can be used without being locked into any specific component, the complexity and learning cost of each component are reduced, and the deployment efficiency and execution efficiency of the machine learning task are improved.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the general inventive concept.
Drawings
These and/or other aspects and advantages of the present disclosure will become apparent from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1a, 1b, 1c, 1d and 1e are application diagrams illustrating machine learning tasks.
FIG. 2 is a flowchart illustrating a component adaptation method of a machine learning task according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a component adaptation method of a machine learning task according to another exemplary embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating a component adaptation apparatus of a machine learning task according to an exemplary embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a component adaptation architecture according to an exemplary embodiment of the present disclosure.
Detailed Description
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of embodiments of the invention defined by the claims and their equivalents. Various specific details are included to aid understanding, but are merely to be considered exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
It should be noted that, in this disclosure, "at least one of the items" covers three parallel cases: "any one of the items", "a combination of any of the items", and "all of the items". For example, "including at least one of A and B" covers the following three parallel cases: (1) including A; (2) including B; (3) including A and B. Likewise, "at least one of step one and step two is executed" covers the following three parallel cases: (1) executing step one; (2) executing step two; (3) executing step one and step two.
Fig. 1a, 1b, 1c, 1d and 1e are application diagrams illustrating machine learning tasks.
The machine learning task shown includes a data phase, a model phase, a service phase, and a monitoring phase, which are implemented based on the OpenMLStudio platform. First, metadata is needed to provide sample data for model training; the metadata may be self-built metadata, metadata of each community, metadata of enterprise customers, or the like. Here, the required data (not shown in the figure) may be searched, discovered, and subscribed to in a dashboard of the metadata service. Then, referring to fig. 1a, corresponding data can be selected according to the service requirement in the left menu tree of the OpenMLStudio user interface. Then, referring to fig. 1b, after the corresponding data is selected, an operator corresponding to the data may be selected. Then, referring to fig. 1c, the desired DAG (Directed Acyclic Graph) can be constructed by drag-and-drop according to a training strategy based on the selected data and operators. Then, referring to fig. 1d, after model training is completed, the model is hosted in a model repository, and the tested and validated model is deployed to provide the model service capability. Then, referring to fig. 1e, on the basis of the uniformly collected service logs of the operators, the service quality of other operators can be monitored in real time through a data drift operator and a model drift operator. Based on the above process, various machine learning tasks such as a regression task, a classification task, a clustering task, or a dimension reduction task may be performed. It can be seen that practical applications performing machine learning tasks involve interfacing with components such as various operators.
According to the component adaptation method and apparatus for a machine learning task of the present disclosure, various self-built, community, or enterprise-side customer components can be integrated; product requirements can be met and the components can be used without being locked into any specific component, and the complexity and learning cost of each component are reduced.
Component adaptation methods and apparatuses for machine learning tasks according to exemplary embodiments of the present disclosure are described in detail below with reference to fig. 2 through 5.
Fig. 2 is a flowchart illustrating a component adaptation method of a machine learning task according to an exemplary embodiment of the present disclosure. Here, the component may include a native operator. As an example, the native operator may be an operator in a component ecology such as Airflow, Argo-workflow, or Kubeflow.
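As a purely illustrative, non-limiting sketch (not part of the claimed method), the snippet below shows what such a native operator may look like in an Airflow 2.x component ecology; the DAG id, start date, and training callable are placeholder values assumed for illustration.

```python
# Illustrative Airflow 2.x DAG containing one native operator.
# The DAG id, start date, and training callable are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def train_model():
    # Stand-in for an actual machine learning training step.
    print("training model ...")


with DAG(dag_id="example_training_dag",
         start_date=datetime(2021, 12, 31),
         schedule_interval=None) as dag:
    # PythonOperator is the "native operator" that the adapter would
    # later recognize and, if necessary, convert for another engine.
    train = PythonOperator(task_id="train", python_callable=train_model)
```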
Referring to fig. 2, in step S201, a first protocol used by a native operator may be identified and it may be determined whether an underlying computing engine of a machine learning task supports the first protocol. As an example, when the native operator is an operator in the Airflow component ecology, the first protocol may be the Airflow protocol, and it may then be determined whether the underlying computing engine of the machine learning task supports the Airflow protocol.
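A minimal sketch of step S201 is given below, assuming a hypothetical recognition scheme based on the operator's Python module; the Protocol enumeration and function names are assumptions and not part of the disclosure.

```python
# Hypothetical sketch of step S201: recognize the first protocol used by a
# native operator and check whether the underlying computing engine supports
# it. All names here are assumptions made for illustration.
from enum import Enum


class Protocol(Enum):
    AIRFLOW = "airflow"
    ARGO_WORKFLOW = "argo-workflow"
    KUBEFLOW = "kubeflow"


def recognize_protocol(native_operator) -> Protocol:
    # One possible heuristic: inspect the module the operator class comes from.
    module = type(native_operator).__module__
    if module.startswith("airflow."):
        return Protocol.AIRFLOW
    if module.startswith("kfp.") or module.startswith("kubeflow."):
        return Protocol.KUBEFLOW
    return Protocol.ARGO_WORKFLOW


def engine_supports(supported_protocols: set, first_protocol: Protocol) -> bool:
    # The underlying computing engine advertises the protocols it supports.
    return first_protocol in supported_protocols
```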
Next, in step S202, when the underlying computing engine does not support the first protocol, the first protocol may be converted into a second protocol supported by the underlying computing engine. As an example, when the native operator is an operator in the Airflow component ecology and the underlying computing engine is an Argo-workflow based computing engine, the Airflow protocol used by the native operator may be converted into the Argo-workflow protocol supported by the underlying computing engine.
According to an exemplary embodiment of the present disclosure, the first protocol may be converted into a second protocol supported by the underlying computing engine based on a pre-established unified protocol standard. Here, various services can be sufficiently abstracted and refined, and the unified protocol standard is prefabricated as a framework for protocol conversion, through which conversion efficiency can be improved. As an example, the pre-established unified protocol standard may serve as a common underlying standard for the various existing protocols, so that the existing protocols can be converted into one another based on the unified protocol standard; for a new protocol introduced later, the unified protocol standard can be adaptively adjusted so that the new protocol can also be converted into the various existing protocols based on it. In addition, because the unified protocol standard acts as the underlying layer between the existing protocols, a new protocol only needs to be converted into the unified protocol standard first and then into the other existing protocols, so the amount of change required when converting the new protocol into other protocols is small and the extensibility is good.
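The two-stage conversion described above can be sketched as follows; the UnifiedTaskSpec fields and registry interface are assumptions chosen for illustration and not a definitive implementation of the unified protocol standard.

```python
# Hypothetical sketch of protocol conversion through a pre-established unified
# protocol standard: each concrete protocol registers a converter to and from
# the unified form, so any protocol pair can be bridged via the standard.
# Class and field names are assumptions made for illustration.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class UnifiedTaskSpec:
    """Unified protocol standard for describing a single task step."""
    name: str
    image: str
    command: list
    inputs: Dict[str, Any] = field(default_factory=dict)


class ProtocolConverterRegistry:
    def __init__(self):
        self._to_unified: Dict[str, Callable[[Any], UnifiedTaskSpec]] = {}
        self._from_unified: Dict[str, Callable[[UnifiedTaskSpec], Any]] = {}

    def register(self, protocol: str, to_unified, from_unified):
        # Register the pair of converters for one concrete protocol.
        self._to_unified[protocol] = to_unified
        self._from_unified[protocol] = from_unified

    def convert(self, spec: Any, first_protocol: str, second_protocol: str) -> Any:
        # First protocol -> unified standard -> second protocol.
        unified = self._to_unified[first_protocol](spec)
        return self._from_unified[second_protocol](unified)
```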
Next, in step S203, the native operator may be run on the underlying computing engine based on the second protocol to perform a machine learning task. Here, by converting the first protocol used by the native operator into the second protocol supported by the underlying computing engine, the native operator is adapted to the underlying computing engine and can run on it successfully.
According to an exemplary embodiment of the present disclosure, when the underlying computing engine supports the first protocol, the native operator may run directly on the underlying computing engine. A component adaptation method of a machine learning task according to another exemplary embodiment of the present disclosure is described in detail below with reference to fig. 3.
Fig. 3 is a flowchart illustrating a component adaptation method of a machine learning task according to another exemplary embodiment of the present disclosure. Here, the component further includes metadata. As an example, the metadata may be metadata from DataHub, Amundsen, Atlas, or the like.
Referring to fig. 3, in step S301, a first protocol used by a native operator may be identified and it may be determined whether an underlying computing engine of a machine learning task supports the first protocol.
Next, in step S302, when the underlying computing engine does not support the first protocol, the first protocol may be converted into a second protocol supported by the underlying computing engine.
Next, in step S303, metadata may be identified and it may be determined whether the metadata supports native operators.
Next, in step S304, when the metadata does not support the native operator, the metadata may be parsed to obtain data information of the original data (raw data) corresponding to the metadata. Here, the data information may include an access address (address) and an access token (token), which may be used to access the original data; in other words, the original data can be located and acquired based on the access address and the access token, thereby providing data support for the native operator.
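A minimal sketch of such metadata parsing is shown below; the metadata layout, field names, and RawDataInfo structure are assumptions for illustration only.

```python
# Hypothetical sketch of step S304: parse metadata that the native operator
# cannot consume directly and extract the access address and access token of
# the corresponding original (raw) data. The metadata layout is an assumption.
from dataclasses import dataclass


@dataclass
class RawDataInfo:
    address: str  # where the original data can be reached, e.g. an S3/HDFS URI
    token: str    # credential used to access the original data


def parse_metadata(metadata: dict) -> RawDataInfo:
    # Assumed layout: {"storage": {"address": "...", "token": "..."}}
    storage = metadata.get("storage", {})
    return RawDataInfo(
        address=storage.get("address", ""),
        token=storage.get("token", ""),
    )
```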
Next, in step S305, based on the second protocol and the data information, the native operator may be run on the underlying computing engine in combination with the metadata to perform the machine learning task. Here, the native operator is adapted to the underlying computing engine by converting the first protocol it uses into the second protocol supported by the underlying computing engine, and parsing out the data information of the original data corresponding to the metadata provides data support for the native operator when it runs on the underlying computing engine.
Exemplary embodiments of the present disclosure may also perform authentication and authorization, based on an identity and access management (Identity and Access Management, IAM) capability, when the native operator interfaces with the underlying computing engine. By automatically adapting the IAM capability, the native operator itself does not need to implement IAM, which improves the compatibility of the native operator.
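A sketch of how the adapter might attach IAM handling around operator submission is given below; the IAMClient interface and the engine.submit call are hypothetical and assumed only for illustration.

```python
# Hypothetical sketch of adapter-side IAM handling: credentials are obtained
# and checked when the native operator is submitted to the underlying
# computing engine, so the operator itself needs no IAM logic. The IAMClient
# interface and engine.submit call are assumptions made for illustration.
class IAMClient:
    def authenticate(self, principal: str) -> str:
        """Return a short-lived access token for the given principal."""
        raise NotImplementedError

    def authorize(self, token: str, action: str) -> bool:
        """Check whether the token holder is allowed to perform the action."""
        raise NotImplementedError


def submit_with_iam(engine, operator_spec, iam: IAMClient, principal: str):
    token = iam.authenticate(principal)
    if not iam.authorize(token, "submit_task"):
        raise PermissionError(f"{principal} may not submit tasks to the engine")
    # The engine receives the token on the operator's behalf.
    return engine.submit(operator_spec, auth_token=token)
```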
According to the component adaptation method and apparatus for a machine learning task of the present disclosure, various self-built, community, or enterprise-side customer components can be integrated; product requirements can be met and the components can be used without being locked into any specific component, the complexity and learning cost of each component are reduced, and the deployment efficiency and execution efficiency of the machine learning task are improved.
Fig. 4 is a block diagram illustrating a component adaptation apparatus of a machine learning task according to an exemplary embodiment of the present disclosure. Here, the component may include a native operator. As described above, the native operator may be an operator in a component ecology such as Airflow, Argo-workflow, or Kubeflow. The component adaptation apparatus of a machine learning task according to exemplary embodiments of the present disclosure may be implemented in a computing device having sufficient computing capabilities.
Referring to fig. 4, a component adaptation apparatus 400 of a machine learning task according to an exemplary embodiment of the present disclosure may include a protocol identification unit 410, a protocol conversion unit 420, and an operator execution unit 430.
The protocol identification unit 410 may identify a first protocol used by the native operator and determine whether an underlying computing engine of the machine learning task supports the first protocol.
When the underlying computing engine does not support the first protocol, the protocol conversion unit 420 may convert the first protocol into a second protocol supported by the underlying computing engine.
The operator execution unit 430 may run the native operator on the underlying computing engine based on the second protocol to perform the machine learning task.
According to an exemplary embodiment of the present disclosure, the protocol conversion unit 420 may convert the first protocol into a second protocol supported by the underlying computing engine based on a pre-established unified protocol standard.
As mentioned above, the component may further comprise metadata, and the component adaptation apparatus may further comprise a metadata processing unit (not shown). The metadata processing unit may identify the metadata and determine whether the metadata supports the native operator; when the metadata does not support the native operator, the metadata processing unit may parse the metadata to obtain the data information of the original data corresponding to the metadata.
On this basis, the operator execution unit 430 may, based on the second protocol and the data information, run the native operator on the underlying computing engine in combination with the metadata to perform the machine learning task. Here, the data information may include an access address and an access token, which may be used to access the original data.
The operator execution unit 430 may also perform authentication and authorization, based on the identity and access management capability, when the native operator interfaces with the underlying computing engine.
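For orientation only, the following sketch shows one way the three units could be composed into the apparatus of fig. 4; the unit interfaces and the ComponentAdapter class are assumptions and not a definitive implementation.

```python
# Hypothetical sketch of how the three units of the component adaptation
# apparatus 400 could be composed; the unit names mirror the description,
# while the interfaces themselves are assumptions made for illustration.
class ComponentAdapter:
    def __init__(self, identifier, converter, runner, engine):
        self.identifier = identifier  # protocol identification unit 410
        self.converter = converter    # protocol conversion unit 420
        self.runner = runner          # operator execution unit 430
        self.engine = engine          # underlying computing engine

    def run(self, native_operator):
        first_protocol = self.identifier.identify(native_operator)
        if self.engine.supports(first_protocol):
            spec = native_operator
        else:
            # Convert to the second protocol supported by the engine.
            spec = self.converter.convert(native_operator, first_protocol,
                                          self.engine.protocol)
        return self.runner.run(spec, self.engine)
```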
According to the component adaptation method and apparatus for a machine learning task of the present disclosure, various self-built, community, or enterprise-side customer components can be integrated; product requirements can be met and the components can be used without being locked into any specific component, the complexity and learning cost of each component are reduced, and the deployment efficiency and execution efficiency of the machine learning task are improved. The component adaptation architecture according to an exemplary embodiment of the present disclosure is described in detail below with reference to fig. 5.
Fig. 5 is an illustration showing a component adaptation architecture according to an exemplary embodiment of the present disclosure.
Referring to fig. 5, an adapter may serve as the core of the component adaptation architecture to provide the adaptation capability. Taking the OpenMLStudio platform interfacing with the adapter as an example, as described above, when a machine learning task is executed based on the OpenMLStudio platform, components such as metadata and various operators need to be interfaced with. The ecological operators of components such as Airflow or Kubeflow can be integrated through the adapter, so that the native operators can run directly on the underlying computing engine through the adapter. Here, as shown in fig. 5, the underlying computing engine may be, but is not limited to, a computing engine based on Airflow, Argo-workflow, Kubeflow, or the like in a Kubernetes computing cluster. Specifically, the recognition and transformation of the first protocol used by the native operator may be implemented through the adapter; the metadata may also be recognized through the adapter, and it may be determined whether the metadata needs to be processed, that is, whether data information such as the address and token of the original data corresponding to the metadata needs to be computed on the basis of understanding the semantics of the metadata. In addition, the IAM capability can be automatically adapted through the adapter, so that the native operator does not need to implement the IAM capability itself; moreover, a pre-established unified protocol standard can be provided through the adapter as a framework for protocol conversion, through which efficiency can be improved.
Component adaptation methods and apparatuses for machine learning tasks according to exemplary embodiments of the present disclosure have been described above with reference to fig. 2 through 5.
The various units in the component adaptation device of the machine learning task shown in fig. 4 may be configured as software, hardware, firmware or any combination of the above to perform a specific function. For example, each unit may correspond to an application specific integrated circuit, may correspond to a pure software code, or may correspond to a module in which software is combined with hardware. Furthermore, one or more functions implemented by the respective units may also be uniformly performed by components in a physical entity device (e.g., a processor, a client, a server, or the like).
Furthermore, the component adaptation method of the machine learning task described with reference to fig. 2 may be implemented by a program (or instructions) recorded on a computer-readable storage medium. For example, according to an exemplary embodiment of the present disclosure, a computer-readable storage medium storing instructions may be provided, wherein the instructions, when executed by at least one computing device, cause the at least one computing device to perform a component adaptation method of a machine learning task according to the present disclosure.
The computer program in the above-described computer-readable storage medium may run in an environment deployed on computer devices such as a client, a host, a proxy device, a server, etc. It should be noted that the computer program may also be used to perform additional steps other than the above-described steps or to perform more specific processing when the above-described steps are performed; the contents of these additional steps and further processing have been mentioned in the description of the related method with reference to fig. 2, so they will not be repeated here.
It should be noted that each unit in the component adaptation apparatus of the machine learning task according to the exemplary embodiments of the present disclosure may rely entirely on the execution of a computer program to implement its corresponding function, i.e., each unit corresponds to a step in the functional architecture of the computer program, so that the whole system is invoked through a dedicated software package (e.g., a lib library) to implement the corresponding function.
On the other hand, the respective units shown in fig. 4 may also be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the corresponding operations may be stored in a computer-readable medium, such as a storage medium, so that the processor can perform the corresponding operations by reading and executing the corresponding program code or code segments.
For example, exemplary embodiments of the present disclosure may also be implemented as a computing device including a storage component having a set of computer-executable instructions stored therein that, when executed by a processor, perform a component adaptation method for machine learning tasks according to exemplary embodiments of the present disclosure.
In particular, the computing devices may be deployed in servers or clients, as well as on node devices in a distributed network environment. Further, the computing device may be a PC computer, tablet device, personal digital assistant, smart phone, web application, or other device capable of executing the above set of instructions.
Here, the computing device is not necessarily a single computing device, but may be any device or aggregate of circuits capable of executing the above-described instructions (or instruction set) alone or in combination. The computing device may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In a computing device, the processor may include a Central Processing Unit (CPU), a Graphics Processor (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
Some operations described in the component adaptation method of the machine learning task according to the exemplary embodiment of the present disclosure may be implemented in a software manner, some operations may be implemented in a hardware manner, and furthermore, the operations may be implemented in a combination of software and hardware.
The processor may execute instructions or code stored in a storage component, where the storage component may also store data. Instructions and data may also be transmitted and received over a network via a network interface device, which may employ any known transmission protocol.
The storage component may be integrated with the processor, for example, as RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, the storage component may comprise a stand-alone device, such as an external disk drive, a storage array, or any other storage device usable by a database system. The storage component and the processor may be operatively coupled or may communicate with each other, such as through an I/O port, network connection, etc., such that the processor is able to read files stored in the storage component.
In addition, the computing device may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the computing device may be connected to each other via buses and/or networks.
Component adaptation methods for machine learning tasks according to exemplary embodiments of the present disclosure may be described as various interconnected or coupled functional blocks or functional diagrams. However, these functional blocks or functional diagrams may equally be integrated into a single logic device or operated according to imprecise boundaries.
Thus, the component adaptation method of the machine learning task described with reference to fig. 2 may be implemented by a system comprising at least one computing device and at least one storage device storing instructions.
According to an exemplary embodiment of the present disclosure, the at least one computing device is a computing device for performing a component adaptation method of a machine learning task according to an exemplary embodiment of the present disclosure, a set of computer-executable instructions being stored in the storage device, which, when executed by the at least one computing device, performs the component adaptation method of a machine learning task described with reference to fig. 2.
The foregoing description of exemplary embodiments of the present disclosure is merely illustrative and not exhaustive, and the present disclosure is not limited to the disclosed exemplary embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. Accordingly, the scope of the present disclosure should be determined by the scope of the claims.

Claims (10)

1. A component adaptation method of a machine learning task, wherein the component comprises native operators, the component adaptation method comprising:
identifying a first protocol used by the native operator, and judging whether an underlying computing engine of a machine learning task supports the first protocol;
converting the first protocol into a second protocol supported by the underlying computing engine when the first protocol is not supported by the underlying computing engine;
and running the native operator on the underlying computing engine based on the second protocol to perform the machine learning task.
2. The component adaptation method of claim 1, wherein converting the first protocol to a second protocol supported by the underlying computing engine comprises:
converting the first protocol into a second protocol supported by the underlying computing engine based on a pre-established unified protocol standard.
3. The component adaptation method of claim 1, wherein the component further comprises metadata, the component adaptation method further comprising:
identifying the metadata and judging whether the metadata supports the native operator or not;
and when the metadata does not support the native operator, parsing the metadata to acquire the data information of the original data corresponding to the metadata.
4. The component adaptation method of claim 3, wherein, based on the second protocol, causing the native operator to run on the underlying computational engine to perform the machine learning task comprises:
based on the second protocol and the data information, the native operator is caused to run on the underlying computing engine in conjunction with the metadata to perform the machine learning task.
5. The component adaptation method of claim 3, wherein the data information comprises an access address and an access token for accessing the original data.
6. The component adaptation method of claim 1 or 4, wherein, based on the second protocol, causing the native operator to run on the underlying computing engine to perform the machine learning task comprises:
based on the identity recognition and access management capability, authentication and authorization are performed when the native operator interfaces with the underlying computing engine.
7. A component adaptation apparatus of a machine learning task, wherein the component comprises a native operator, the component adaptation apparatus comprising:
a protocol identification unit configured to: identifying a first protocol used by the native operator, and judging whether an underlying computing engine of a machine learning task supports the first protocol;
a protocol conversion unit configured to: converting the first protocol into a second protocol supported by the underlying computing engine when the first protocol is not supported by the underlying computing engine;
an operator execution unit configured to: running the native operator on the underlying computing engine based on the second protocol to perform the machine learning task.
8. The component adaptation apparatus of claim 7, wherein the protocol conversion unit is configured to: converting the first protocol into a second protocol supported by the underlying computing engine based on a pre-established unified protocol standard.
9. A computer-readable storage medium storing instructions that, when executed by at least one computing device, cause the at least one computing device to perform the component adaptation method of the machine learning task of any one of claims 1-6.
10. A system comprising at least one computing device and at least one storage device storing instructions that, when executed by the at least one computing device, cause the at least one computing device to perform the component adaptation method of the machine learning task of any one of claims 1 to 6.
CN202111667684.7A 2021-12-31 2021-12-31 Component adaptation method and device for machine learning task Pending CN116415680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111667684.7A CN116415680A (en) 2021-12-31 2021-12-31 Component adaptation method and device for machine learning task

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111667684.7A CN116415680A (en) 2021-12-31 2021-12-31 Component adaptation method and device for machine learning task

Publications (1)

Publication Number Publication Date
CN116415680A true CN116415680A (en) 2023-07-11

Family

ID=87056803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111667684.7A Pending CN116415680A (en) 2021-12-31 2021-12-31 Component adaptation method and device for machine learning task

Country Status (1)

Country Link
CN (1) CN116415680A (en)

Similar Documents

Publication Publication Date Title
KR102493449B1 (en) Edge computing test methods, devices, electronic devices and computer-readable media
CN109542399B (en) Software development method and device, terminal equipment and computer readable storage medium
US11762634B2 (en) Systems and methods for seamlessly integrating multiple products by using a common visual modeler
US9363195B2 (en) Configuring cloud resources
CN111797969A (en) Neural network model conversion method and related device
US10810220B2 (en) Platform and software framework for data intensive applications in the cloud
US11100233B2 (en) Optimizing operating system vulnerability analysis
US8849947B1 (en) IT discovery of virtualized environments by scanning VM files and images
CN110895471A (en) Installation package generation method, device, medium and electronic equipment
US11934287B2 (en) Method, electronic device and computer program product for processing data
CN105721451B (en) A kind of prolongable Modbus protocol analysis method and device
CN111026439A (en) Application program compatibility method, device, equipment and computer storage medium
CN111414154A (en) Method and device for front-end development, electronic equipment and storage medium
CN114253798A (en) Index data acquisition method and device, electronic equipment and storage medium
CN112035270A (en) Interface adaptation method, system, device, computer readable medium and electronic equipment
WO2024001240A1 (en) Task integration method and apparatus for multiple technology stacks
CN112491940A (en) Request forwarding method and device of proxy server, storage medium and electronic equipment
CN116248526A (en) Method and device for deploying container platform and electronic equipment
CN116415680A (en) Component adaptation method and device for machine learning task
CN115392501A (en) Data acquisition method and device, electronic equipment and storage medium
CN113742385A (en) Data query method and device
CN112181401A (en) Application construction method and application construction platform
US20230289354A1 (en) Endpoint scan and profile generation
CN116627682B (en) Remote industrial information detection method and device based on shared memory
CN116302847B (en) Dynamic acquisition method and device of abnormal information, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination