CN114047921A - Inference engine development platform, method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114047921A
Authority
CN
China
Prior art keywords
engine
inference engine
platform
reasoning
inference
Prior art date
Legal status
Pending
Application number
CN202111347748.5A
Other languages
Chinese (zh)
Inventor
王常凯
黄雷
陈龙
季映羽
袁野
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111347748.5A priority Critical patent/CN114047921A/en
Priority to CN202211235537.7A priority patent/CN115454446A/en
Publication of CN114047921A publication Critical patent/CN114047921A/en
Pending legal-status Critical Current

Classifications

    • G06F 8/41: Compilation (G Physics > G06 Computing; Calculating or Counting > G06F Electric Digital Data Processing > G06F 8/00 Arrangements for software engineering > G06F 8/40 Transformation of program code)
    • G06F 8/71: Version control; configuration management (G06F 8/70 Software maintenance or management)
    • G06F 9/45558: Hypervisor-specific management and integration aspects (G06F 9/455 Emulation; interpretation; software simulation > G06F 9/45533 Hypervisors; virtual machine monitors)
    • G06N 5/04: Inference or reasoning models (G06N 5/00 Computing arrangements using knowledge-based models)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Stored Programmes (AREA)

Abstract

The disclosure provides an inference engine development platform, an inference engine development method, an electronic device, and a storage medium, and relates to the technical field of artificial intelligence, in particular to the technical fields of deep learning and computer vision. The specific implementation scheme is as follows: an inference engine development platform comprising a cross-platform inference module and an inference engine calling module. The cross-platform inference module is used for defining an inference engine base class in a cross-platform manner, where the base class comprises a plurality of inference engine subclasses and different subclasses correspond to different inference engines. The inference engine calling module is used for acquiring configuration information for an inference engine to be configured, calling that engine based on the definition of its subclass under the base class, and configuring it with the configuration information to obtain a target inference engine. The development of an inference engine is thereby realized.

Description

Inference engine development platform, method, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, and in particular to the technical fields of deep learning and computer vision.
Background
With the development of Artificial Intelligence (AI), AI capabilities were traditionally deployed on the server side: data uploaded by a user was processed there, and the result was returned to the user. Thanks to the rapid progress of semiconductor technology, AI can now also be implemented with dedicated AI chips and their corresponding inference engines, opening a new stage for AI technology.
Disclosure of Invention
The disclosure provides an inference engine development platform, an inference engine development method, an electronic device, and a storage medium.
According to an aspect of the present disclosure, there is provided an inference engine development platform, including:
a cross-platform inference module and an inference engine calling module;
the cross-platform inference module is used for defining an inference engine base class in a cross-platform manner, wherein the inference engine base class comprises a plurality of inference engine subclasses, and different inference engine subclasses correspond to different inference engines;
and the inference engine calling module is used for acquiring configuration information for an inference engine to be configured, calling the inference engine to be configured based on the definition of the inference engine subclass under the inference engine base class, and configuring the inference engine to be configured with the configuration information to obtain a target inference engine.
According to another aspect of the present disclosure, there is provided an inference engine development method, applied to an inference engine development platform, including:
defining an inference engine base class in a cross-platform manner using a cross-platform inference module, wherein the inference engine base class comprises a plurality of inference engine subclasses, and different inference engine subclasses correspond to different inference engines;
and acquiring, using an inference engine calling module, configuration information for an inference engine to be configured, calling the inference engine to be configured based on the definition of an inference engine subclass under the inference engine base class, and configuring the inference engine to be configured with the configuration information to obtain a target inference engine.
The inference engine development platform provided by the present disclosure includes a cross-platform inference module and an inference engine calling module. The cross-platform inference module defines an inference engine base class in a cross-platform manner, where the base class comprises a plurality of inference engine subclasses and different subclasses correspond to different inference engines. The inference engine calling module acquires configuration information for an inference engine to be configured, calls that engine based on the subclass definition under the base class, and configures it with the configuration information to obtain a target inference engine. The development of an inference engine is thereby realized.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. In the drawings:
fig. 1 is a schematic structural diagram of an inference engine development platform provided according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart diagram illustrating a first inference engine development method provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a flow chart diagram of a second inference engine development method provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a block diagram of an electronic device for implementing the inference engine development method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, various AI chips each have their own corresponding AI inference engines. When applying an AI chip to product development, a developer needs to understand the features, parameters, data structures, and usage modes of the various inference engines and their corresponding AI chips. Because these inference engines and chips have distinct features and differ greatly from one another, and the number of inference engine and chip types keeps growing, this creates great challenges for developers in early development, later maintenance, and iterative updates over time.
To address this issue, the present disclosure provides an inference engine development platform, comprising:
a cross-platform inference module and an inference engine calling module;
the cross-platform inference module is used for defining an inference engine base class in a cross-platform manner, wherein the inference engine base class comprises a plurality of inference engine subclasses, and different inference engine subclasses correspond to different inference engines;
and the inference engine calling module is used for acquiring configuration information for an inference engine to be configured, calling the inference engine to be configured based on the definition of the inference engine subclass under the inference engine base class, and configuring the inference engine to be configured with the configuration information to obtain a target inference engine.
Thus, the inference engine development platform provided by the disclosure defines inference engine base classes in a cross-platform manner using the cross-platform inference module, so that a variety of different inference engines can be integrated into the platform and used across platforms. The inference engine calling module acquires the configuration information for the inference engine to be configured, calls that engine, and configures it with the configuration information to obtain the target inference engine. The platform can therefore configure and call a variety of inference engines, the required engine can be configured flexibly through different configuration information, and the efficiency of developing AI applications with different inference engines is effectively improved.
The inference engine development platform provided by the present disclosure is explained in detail by specific embodiments below.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an inference engine development platform according to an embodiment of the present disclosure, including: a cross-platform inference module and an inference engine calling module;
the cross-platform inference module is used for defining inference engine base classes in a cross-platform manner, wherein each inference engine base class comprises a plurality of inference engine subclasses, and different inference engine subclasses correspond to different inference engines.
In the field of artificial intelligence, an inference engine is the component of a system that applies logical rules to a knowledge base to deduce new information; here, it can be a deep learning framework used to develop AI applications. Basic inference engines come in several types, and each type can correspond to its own inference engine base class, so the base classes defined by the cross-platform inference module can include base classes corresponding to multiple types of inference engines. Each base class comprises a plurality of inference engine subclasses, and different subclasses correspond to different inference engines.
In one example, the inference engine base classes may include the base classes corresponding to six types of inference engines: Paddle Inference, Paddle Lite, Amba, Rknn, Nnie, and TensorRT.
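As an illustration of the base-class/subclass design described above, the following C++ sketch shows one way such a hierarchy could look. All names (InferenceEngine, TensorrtEngine, Run) are hypothetical; the patent does not disclose its actual class definitions.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical cross-platform base class: every engine subclass exposes
// the same interface regardless of the underlying chip or framework.
class InferenceEngine {
public:
    virtual ~InferenceEngine() = default;
    // Run inference on input already packed as a flat tensor.
    virtual std::vector<float> Run(const std::vector<float>& input) = 0;
    virtual std::string Name() const = 0;
};

// One subclass per underlying engine type (Paddle Inference, Paddle Lite,
// Amba, Rknn, Nnie, TensorRT); only a stub is shown here.
class TensorrtEngine : public InferenceEngine {
public:
    std::vector<float> Run(const std::vector<float>& input) override {
        return input;  // stub: a real subclass would call the TensorRT API
    }
    std::string Name() const override { return "tensorrt"; }
};
```

Client code then holds a base-class pointer and never depends on the concrete engine, which is what allows engines to be swapped across platforms.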
The inference engine calling module is used for acquiring configuration information for the inference engine to be configured, calling the inference engine to be configured based on the definition of the inference engine subclass under the inference engine base class, and configuring it with the configuration information to obtain the target inference engine.
The configuration information can be set as required, and inference engines configured with different configuration information differ. Different inference engine subclasses under the same base class share the same underlying engine type, but once configured with different configuration information they constitute different inference engines.
In one example, the types of configuration information may include: class AmbaConfig (configuration for Amba), class PaddleLiteConfig (configuration for Paddle Lite), class PaddleInferenceConfig (configuration for Paddle Inference), class RknnConfig (configuration for Rknn), class NnieConfig (configuration for Nnie), and class TensorrtConfig (configuration for TensorRT), corresponding respectively to the six types of inference engines above.
The inference engine to be configured is an inference engine that can be configured based on different configuration information. After the configuration information set for it is obtained, the engine can be called based on the definition of its subclass under the base class and then configured with the configuration information; the configured engine is the target inference engine.
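A minimal sketch of how a calling module might map per-engine configuration information to the corresponding subclass. The EngineConfig struct and CreateTargetEngine function are illustrative assumptions, not the patent's actual implementation.

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical configuration struct mirroring the per-engine config
// classes named above (AmbaConfig, TensorrtConfig, ...).
struct EngineConfig {
    std::string engine_type;   // e.g. "tensorrt", "paddle_lite"
    std::string model_path;    // path to the model file (illustrative)
    int batch_size = 1;
};

// Sketch of the calling module: select the subclass named by the config.
// For brevity it returns the subclass name; a real module would
// instantiate the subclass and apply the rest of the configuration.
std::string CreateTargetEngine(const EngineConfig& cfg) {
    static const std::map<std::string, std::string> known = {
        {"paddle_inference", "PaddleInferenceEngine"},
        {"paddle_lite", "PaddleLiteEngine"},
        {"amba", "AmbaEngine"},
        {"rknn", "RknnEngine"},
        {"nnie", "NnieEngine"},
        {"tensorrt", "TensorrtEngine"},
    };
    auto it = known.find(cfg.engine_type);
    if (it == known.end()) {
        throw std::runtime_error("unknown engine: " + cfg.engine_type);
    }
    return it->second;
}
```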
Thus, the inference engine development platform provided by the disclosure defines inference engine base classes in a cross-platform manner using the cross-platform inference module, so that a variety of different inference engines can be integrated into the platform and used across platforms. The inference engine calling module acquires the configuration information for the inference engine to be configured, calls that engine, and configures it with the configuration information to obtain the target inference engine. The platform can therefore configure and call a variety of inference engines, the required engine can be configured flexibly through different configuration information, and the efficiency of developing AI applications with different inference engines is effectively improved.
In an embodiment of the present disclosure, the platform further includes:
a cross-platform compiling module, which comprises a cross-compiling Docker container (an application container engine) packaged with a cross-compilation toolchain;
the cross-platform compiling module is specifically used for completing cross-platform conversion of the configuration information using the cross-compiling Docker container.
The cross-compilation toolchain is an integrated development environment for cross-platform compilation, consisting of a compiler, a linker, and an interpreter; a Docker container is an application container engine into which the toolchain can be packaged. The resulting cross-compiling Docker container can convert encoding types between different platforms. When the encoding type used on the user side differs from that used by the inference engine to be configured, the container converts the encoding type of the configuration information (that is, performs its cross-platform conversion), so that the configuration information matches the encoding type used by the inference engine to be configured and can be used to develop and configure it.
Inference engines on different platforms (with different encoding types) differ in configuration mode, configuration information, and so on. Completing cross-platform conversion of the configuration information with the cross-compiling Docker container allows different configuration information to be converted, so that the inference engine to be configured can be configured with different configuration information on the inference engine development platform.
Thus, the inference engine development platform provided by the disclosure can perform cross-platform conversion of the configuration information, so that configuration information for different types of inference engines can be applied directly to the inference engine to be configured on the platform.
In an embodiment of the present disclosure, the platform further includes:
the cross-platform conversion module comprises a platform conversion Docker container packaged by a platform conversion tool;
and the cross-platform conversion module is specifically used for completing cross-platform conversion of the target inference engine by using the platform conversion Docker container.
The platform conversion tool is a tool for converting the operation platform of the inference engine, and can be assembled in a Docker container to obtain a platform conversion Docker container. The characteristics, calling modes and the like of the inference engines of different platforms are different, because the inference engines cannot be directly used in a cross-platform mode due to different factors such as coding types and operating environments, cross-platform conversion of the target inference engine is completed by utilizing a platform conversion Docker container, the target inference engines of different coding types can be converted, and various target inference engines with differences can be called based on an inference engine development platform.
Therefore, the inference engine development platform provided by the disclosure can perform cross-platform conversion on the target inference engine, so that different types of target inference engines can be directly called based on the inference engine development platform provided by the disclosure.
In an embodiment of the present disclosure, the inference engine calling module is specifically configured to: obtain configuration information for the inference engine to be configured; call the inference engine to be configured based on the definition of the inference engine subclass under the inference engine base class, and configure it with the configuration information to obtain the target inference engine; and add an inference engine subclass for the target inference engine under the inference engine base class.
As mentioned above, after the configuration information for the inference engine to be configured is obtained, the engine can be called based on the definition of its subclass under the base class and configured with the configuration information to obtain a target inference engine meeting the requirements. This target inference engine may differ from every subclass already included under its base class, being a new engine configured from the configuration information; in that case, a new subclass corresponding to the target inference engine can be added under the base class without affecting the other subclasses.
Thus, the inference engine development platform provided by the disclosure can obtain a new inference engine simply by supplying configuration information, and an updated engine can be obtained merely by updating that information, which effectively improves inference engine development efficiency.
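The step of adding a new subclass under a base class without disturbing existing subclasses can be illustrated with a simple factory registry; Engine, Registry, and RegisterEngine are hypothetical names chosen for this sketch, not the patent's API.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Minimal base class for the registry sketch.
class Engine {
public:
    virtual ~Engine() = default;
    virtual std::string Name() const = 0;
};

using EngineFactory = std::function<std::unique_ptr<Engine>()>;

// Registry of engine subclasses keyed by name. Stored as a function-local
// static so it is initialized on first use.
std::map<std::string, EngineFactory>& Registry() {
    static std::map<std::string, EngineFactory> r;
    return r;
}

// Register a new subclass under the base class; existing subclasses are
// unaffected because only a new map entry is added.
void RegisterEngine(const std::string& key, EngineFactory f) {
    Registry()[key] = std::move(f);
}
```

Registering a freshly configured target engine is then a single call, leaving every previously registered engine untouched.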
In an embodiment of the present disclosure, the inference engine calling module is further configured to: obtain a target engine pointer of the target inference engine based on the defined inference engine subclass of the target inference engine; obtain target image data; call the target inference engine through the target engine pointer to process the target image data and obtain an image inference result; and export the image inference result of the target image data through the target engine pointer.
After the target inference engine is obtained, its target engine pointer can be obtained based on the defined inference engine subclass of the target inference engine. Calling the target inference engine is done through this pointer, so that the engine analyzes and processes the target image data to obtain an image inference result, which is finally exported through the same pointer.
Thus, the inference engine development platform provided by the disclosure calls the target inference engine through the target engine pointer to process the image data and then exports the image inference result through that pointer, so a developer can obtain image inference results by relying on the platform alone, without learning the called target inference engine.
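The pointer-based call sequence above might look roughly as follows. TargetEngine, Infer, and RunInference are illustrative stand-ins, and the arithmetic inside Infer is a placeholder for real model inference.

```cpp
#include <cstddef>
#include <vector>

// Stand-in for a configured target inference engine.
class TargetEngine {
public:
    // Run inference on image data already converted to a flat tensor.
    std::vector<float> Infer(const std::vector<float>& image) {
        std::vector<float> out(image.size());
        for (std::size_t i = 0; i < image.size(); ++i) {
            out[i] = image[i] * 0.5f;  // placeholder for real inference
        }
        return out;
    }
};

// The calling module works only through the engine pointer: it feeds the
// target image data in and exports the inference result back out.
std::vector<float> RunInference(TargetEngine* engine_ptr,
                                const std::vector<float>& image) {
    return engine_ptr->Infer(image);
}
```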
In an embodiment of the present disclosure, the inference engine calling module is specifically configured to: obtain image data to be inferred, and convert the image data to be inferred into a multi-dimensional matrix form to obtain the target image data.
The image data to be inferred can be image data in a variety of formats. After acquisition, it can be converted into a multi-dimensional matrix form to obtain the target image data, which is then passed into the called target inference engine for processing.
Thus, the inference engine development platform provided by the disclosure converts the image data to be inferred into a multi-dimensional matrix form, which can improve the security of the image data and allows the target inference engine to process it smoothly.
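One common form of this multi-dimensional-matrix conversion is packing interleaved HWC uint8 pixels into a planar NCHW float tensor; the layout and the divide-by-255 normalization below are assumptions for illustration, since the patent does not specify them.

```cpp
#include <cstdint>
#include <vector>

// Convert HWC uint8 image data (height x width x channels, interleaved)
// into NCHW float data (channels x height x width, planar), normalizing
// each pixel value to [0, 1].
std::vector<float> ToNchw(const std::vector<std::uint8_t>& hwc,
                          int height, int width, int channels) {
    std::vector<float> nchw(
        static_cast<std::size_t>(height) * width * channels);
    for (int c = 0; c < channels; ++c) {
        for (int h = 0; h < height; ++h) {
            for (int w = 0; w < width; ++w) {
                nchw[(c * height + h) * width + w] =
                    hwc[(h * width + w) * channels + c] / 255.0f;
            }
        }
    }
    return nchw;
}
```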
In another aspect, referring to fig. 2, fig. 2 is a schematic flowchart of a first inference engine development method provided by the present disclosure, applied to an inference engine development platform and including the following steps S21-S22:
Step S21: define an inference engine base class in a cross-platform manner using the cross-platform inference module, wherein the base class comprises a plurality of inference engine subclasses, and different subclasses correspond to different inference engines.
Step S22: obtain, using the inference engine calling module, configuration information for the inference engine to be configured, call the engine based on the definition of its subclass under the base class, and configure it with the configuration information to obtain the target inference engine.
The method of the embodiment of the disclosure is applied to an intelligent terminal and can be implemented by the intelligent terminal; in actual use, the intelligent terminal may be a computer, a smartphone, or the like.
Thus, the inference engine development method provided by the disclosure defines the inference engine base class in a cross-platform manner using the cross-platform inference module, so that a variety of different inference engines can be integrated on the inference engine development platform. Obtaining configuration information for the inference engine to be configured through the calling module, calling that engine, and configuring it with the configuration information yields the target inference engine, so the platform can configure and call a variety of inference engines, the required engine can be configured flexibly through different configuration information, and the efficiency of developing AI applications with different inference engines is effectively improved.
In an embodiment of the present disclosure, the method further includes:
and based on a cross-platform compiling module, utilizing a cross-compiling Docker container to complete cross-platform conversion of the configuration information, wherein the cross-platform compiling module comprises the cross-compiling Docker container obtained by packing a cross-compiling tool chain.
Therefore, the inference engine development method provided by the disclosure can perform cross-platform conversion on the configuration information, so that the configuration information of different types of inference engines can be directly configured for the inference engine to be configured based on the inference engine development platform provided by the disclosure.
In an embodiment of the present disclosure, the method further includes:
and based on a cross-platform conversion module, completing cross-platform conversion of the target inference engine by using a platform conversion Docker container, wherein the cross-platform conversion module comprises the platform conversion Docker container packaged by a platform conversion tool.
Therefore, the inference engine development method provided by the disclosure can perform cross-platform conversion on the target inference engine, so that different types of target inference engines can be directly called based on the inference engine development platform provided by the disclosure.
In an embodiment of the present disclosure, referring to fig. 3, fig. 3 is a flowchart of a second inference engine development method provided by the present disclosure; after the target inference engine is obtained, the method further includes:
Step S33: add the inference engine subclass of the target inference engine under the inference engine base class using the inference engine calling module.
Thus, the inference engine development method provided by the disclosure can obtain a new inference engine simply by supplying configuration information, and an updated engine can be obtained merely by updating that information, which effectively improves inference engine development efficiency.
In an embodiment of the present disclosure, the method further includes:
using the inference engine calling module, obtain a target engine pointer of the target inference engine based on the defined inference engine subclass of the target inference engine; obtain target image data; call the target inference engine through the target engine pointer to process the target image data and obtain an image inference result; and export the image inference result of the target image data through the target engine pointer.
Thus, the inference engine development method provided by the disclosure calls the target inference engine through the target engine pointer to process the image data and then exports the image inference result through that pointer, so a developer can obtain image inference results by relying on the inference engine development platform alone, without learning the called target inference engine.
In an embodiment of the present disclosure, the method further includes:
and acquiring image data to be inferred by utilizing an inference engine calling module, and converting the image data to be inferred into a multi-dimensional matrix form to obtain target image data.
Therefore, the inference engine development method provided by the disclosure can convert the image data to be inferred into a multi-dimensional matrix form, can improve the security of the image data, and enables the target inference engine to smoothly process the image data.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information involved all comply with the relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 4 shows a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the apparatus 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the device 400 can also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
A number of components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 401 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 401 executes the respective methods and processes described above, such as the inference engine development method. For example, in some embodiments, the inference engine development method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by the computing unit 401, one or more steps of the inference engine development method described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the inference engine development method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. An inference engine development platform comprising:
a cross-platform inference module and an inference engine calling module;
the cross-platform inference module is used for defining an inference engine base class in a cross-platform manner, wherein the inference engine base class comprises a plurality of inference engine subclasses, and different inference engine subclasses correspond to different inference engines;
the inference engine calling module is used for acquiring configuration information aiming at the inference engine to be configured, calling the inference engine to be configured based on the definition of the inference engine subclass in the inference engine base class, and configuring the inference engine to be configured by utilizing the configuration information to obtain a target inference engine.
2. The platform of claim 1, further comprising:
a cross-platform compiling module, wherein the cross-platform compiling module comprises a cross-compiling Docker container obtained by packaging a cross-compiling tool chain;
the cross-platform compiling module is specifically configured to complete cross-platform conversion of the configuration information by using the cross-compiling Docker container.
3. The platform of claim 1, further comprising:
the cross-platform conversion module comprises a platform conversion Docker container packaged by a platform conversion tool;
the cross-platform conversion module is specifically configured to complete cross-platform conversion of the target inference engine by using the platform conversion Docker container.
4. The platform according to claim 1, wherein the inference engine calling module is specifically configured to: obtain configuration information for an inference engine to be configured; call the inference engine to be configured based on the definition of the inference engine subclass in the inference engine base class, and configure the inference engine to be configured by using the configuration information to obtain a target inference engine; and add the inference engine subclass of the target inference engine in the inference engine base class.
5. The platform of claim 4, the inference engine invocation module further to: acquiring a target engine pointer of the target inference engine based on the defined inference engine subclass of the target inference engine; acquiring target image data; calling the target inference engine according to the target engine pointer to process the target image data to obtain an image inference result; and deriving an image inference result of the target image data according to the target engine pointer.
6. The platform of claim 5, the inference engine invocation module being specifically configured to: acquiring image data to be inferred, and converting the image data to be inferred into a multi-dimensional matrix form to obtain target image data.
7. An inference engine development method, applied to an inference engine development platform, comprising:
defining an inference engine base class in a cross-platform manner by using a cross-platform inference module, wherein the inference engine base class comprises a plurality of inference engine subclasses, and different inference engine subclasses correspond to different inference engines; and
acquiring, by using an inference engine calling module, configuration information for an inference engine to be configured, calling the inference engine to be configured based on the definition of an inference engine subclass under the inference engine base class, and configuring the inference engine to be configured by using the configuration information to obtain a target inference engine.
8. The method of claim 7, further comprising:
completing cross-platform conversion of the configuration information by using a cross-compiling Docker container based on a cross-platform compiling module, wherein the cross-platform compiling module comprises the cross-compiling Docker container obtained by packaging a cross-compiling tool chain.
9. The method of claim 7, further comprising:
completing the cross-platform conversion of the target inference engine by using a platform conversion Docker container based on a cross-platform conversion module, wherein the cross-platform conversion module comprises the platform conversion Docker container packaged by a platform conversion tool.
10. The method of claim 7, after obtaining the target inference engine, further comprising:
adding the inference engine subclass of the target inference engine in the inference engine base class by using the inference engine calling module.
11. The method of claim 10, further comprising:
acquiring a target engine pointer of the target inference engine based on the defined inference engine subclass of the target inference engine by utilizing the inference engine calling module; acquiring target image data; calling the target inference engine according to the target engine pointer to process the target image data to obtain an image inference result; and deriving an image inference result of the target image data according to the target engine pointer.
12. The method of claim 11, further comprising:
acquiring image data to be inferred by using the inference engine calling module, and converting the image data to be inferred into a multi-dimensional matrix form to obtain target image data.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 7-12.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 7-12.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 7-12.
CN202111347748.5A 2021-11-15 2021-11-15 Inference engine development platform, method, electronic equipment and storage medium Pending CN114047921A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111347748.5A CN114047921A (en) 2021-11-15 2021-11-15 Inference engine development platform, method, electronic equipment and storage medium
CN202211235537.7A CN115454446A (en) 2021-11-15 2021-11-15 Inference engine development platform, method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111347748.5A CN114047921A (en) 2021-11-15 2021-11-15 Inference engine development platform, method, electronic equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202211235537.7A Division CN115454446A (en) 2021-11-15 2021-11-15 Inference engine development platform, method, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114047921A true CN114047921A (en) 2022-02-15

Family

ID=80209036

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211235537.7A Pending CN115454446A (en) 2021-11-15 2021-11-15 Inference engine development platform, method, electronic equipment and storage medium
CN202111347748.5A Pending CN114047921A (en) 2021-11-15 2021-11-15 Inference engine development platform, method, electronic equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202211235537.7A Pending CN115454446A (en) 2021-11-15 2021-11-15 Inference engine development platform, method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (2) CN115454446A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881236A (en) * 2022-06-02 2022-08-09 广联达科技股份有限公司 Model reasoning system, method and equipment
CN116185532A (en) * 2023-04-18 2023-05-30 之江实验室 Task execution system, method, storage medium and electronic equipment


Also Published As

Publication number Publication date
CN115454446A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
US20150095759A1 (en) Rendering interpreter for visualizing data provided from restricted environment container
CN111625738B (en) APP target page calling method, device, equipment and storage medium
CN114047921A (en) Inference engine development platform, method, electronic equipment and storage medium
CN111259037B (en) Data query method and device based on rule configuration, storage medium and terminal
CN116541497A (en) Task type dialogue processing method, device, equipment and storage medium
CN112925587A (en) Method and apparatus for initializing applications
WO2023221416A1 (en) Information generation method and apparatus, and device and storage medium
CN115509522A (en) Interface arranging method and system for low-code scene and electronic equipment
CN114490116B (en) Data processing method and device, electronic equipment and storage medium
CN114297119B (en) Intelligent contract execution method, device, equipment and storage medium
CN112835615B (en) Plug-in processing method and device for software development kit and electronic equipment
CN114443076A (en) Mirror image construction method, device, equipment and storage medium
CN114201156A (en) Access method, device, electronic equipment and computer storage medium
CN116302218B (en) Function information adding method, device, equipment and storage medium
CN113590217B (en) Function management method and device based on engine, electronic equipment and storage medium
CN115168358A (en) Database access method and device, electronic equipment and storage medium
CN114661402A (en) Interface rendering method and device, electronic equipment and computer readable medium
CN113722037A (en) User interface refreshing method and device, electronic equipment and storage medium
CN114168151A (en) Container-based program compiling method and device, electronic equipment and storage medium
CN114020364A (en) Sensor device adapting method and device, electronic device and storage medium
EP4105775A2 (en) Method, system and electronic device for the production of artificial intelligence models
CN112632293B (en) Industry map construction method and device, electronic equipment and storage medium
CN118227451A (en) Fuzzy test system, fuzzy test method, electronic device and storage medium
CN117215955A (en) Code coverage rate acquisition method and device and electronic equipment
CN118276975A (en) Plug-in management method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination