CN109408500B - Artificial intelligence operation platform - Google Patents

Artificial intelligence operation platform

Info

Publication number
CN109408500B
Authority
CN
China
Prior art keywords: data, intelligent, data structure, artificial intelligence, operation platform
Legal status
Active
Application number
CN201811316780.5A
Other languages
Chinese (zh)
Other versions
CN109408500A (en)
Inventor
王柯
戚骁亚
刘旭
李梦炜
刘建都
Current Assignee
Beijing Deep Singularity Technology Co ltd
Original Assignee
Beijing Deep Singularity Technology Co ltd
Application filed by Beijing Deep Singularity Technology Co ltd
Priority to CN201811316780.5A
Publication of CN109408500A
Application granted
Publication of CN109408500B
Legal status: Active

Landscapes

  • Information Transfer Between Computers (AREA)
  • Stored Programmes (AREA)

Abstract

The invention relates to an artificial intelligence operation platform comprising a hardware layer, a system layer, a software interface layer and an application layer. The hardware layer adopts embedded hardware and comprises a memory connected with a CPU; the system layer adopts a customized Linux system; and the application layer comprises an intelligent client used for realizing data interaction between the operation platform and external intelligent equipment. The platform can screen and process large-scale data, transmitting only valuable information to the cloud server through the network or directly returning the result required by the user. By processing data near its source, network bandwidth and data-center storage are saved, computing resources are greatly reduced, and operation efficiency is improved.

Description

Artificial intelligence operation platform
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to an artificial intelligence operation platform.
Background
A distributed artificial intelligence cloud platform adopts a structure combining a cloud, a network and terminals. The data processing unit of the whole system (its "brain") resides in a cloud server; deep learning and training are carried out on a large-scale neural network in the cloud, so that a high level of intelligence is obtained and the cloud can think and analyze like a human brain. Data transmission within the system is realized through transmission networks which, like the nervous system in the human body, carry signals and instructions between the brain and the body. The "body" of the distributed cloud platform is the intelligent operation platform, which is directly connected with the execution unit and the acquisition unit of the equipment and is responsible for instruction execution and information acquisition.
In the related art, taking an unmanned automobile as an example, the vehicle is a moving "thing" that needs sufficient local data processing capability, i.e. terminal-side artificial intelligence. At the same time, it also needs to obtain strong processing power from the network, which must provide high reliability and low latency. The large-scale data it uploads to the cloud server, however, includes data that does not need to be calculated or has no value; this occupies network bandwidth, wastes data-center storage in the cloud server, wastes computing resources and reduces operation efficiency.
Disclosure of Invention
In view of this, the present invention provides an artificial intelligence operation platform to overcome the defects of the prior art, in which large-scale data occupies network bandwidth and storage, wastes computing resources, and reduces operation efficiency.
In order to achieve this purpose, the invention adopts the following technical scheme: an artificial intelligence operation platform comprising a hardware layer, a system layer, a software interface layer and an application layer;
the hardware layer adopts embedded hardware and comprises a memory and a CPU, the memory being connected with the CPU;
the system layer adopts a customized Linux system and is used for realizing the customization of the operation platform;
the software interface layer comprises a middleware, an ROS system, a data transmission interface and a general function module;
the middleware is used for converting the received different data into a unified data structure;
the ROS system is used for unifying the data acquisition and sending interfaces of various hardware interfaces;
the data transmission interface is used for realizing data transmission between the operation platform and the cloud server;
the general function module is used for providing specific implementation of general functions;
the application layer comprises an intelligent client used for realizing data interaction between the operation platform and external intelligent equipment.
Further, the embedded hardware further includes: GPU and storage hard disk;
the GPU is used for high-performance parallel computing;
the storage hard disk is used for storing various types of software and data.
Further, the application layer further includes:
the intelligent model is used for finishing the operation of the AI algorithm;
and the intelligent model is connected with external intelligent equipment through the intelligent client.
Further, the middleware adopts:
deep neural network middleware.
Further, the customized Linux system adopts:
a read-only file system based on squashfs.
Further, the read-only file system comprises a read-only partition, an encryption partition and a writable partition;
the read-only partition is used for storing the intelligent client and the intelligent model;
the encryption partition is used for storing key models and parameter files required by the operation of the intelligent model;
the writable partition is used for storing data generated by the intelligent client and the intelligent model.
The embodiment of the application provides a working method of an artificial intelligence operation platform, which comprises the following steps:
acquiring different data of the intelligent equipment;
converting the received different data into a unified data structure;
calculating the unified data structure or transmitting the unified data structure to a cloud server and acquiring a calculation result;
and transmitting the calculated result to the intelligent equipment.
Further, the converting the received different data into a unified data structure includes:
the middleware converts the received different data into a unified data structure;
converting the data structure into an input data structure of different platforms;
calling forward prediction interfaces of different platforms;
obtaining forward prediction results of different platforms;
and converting the prediction result into a uniform data structure.
Further, the data structure comprises a storage address of the data, an effective number of the data, a data dimension and a size of the data dimension.
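For illustration only, and not as part of the claimed subject matter, the following Python sketch shows what a unified data structure with these four fields might look like. The class and field names (SimpleTensor, data, valid_count, ndim, dims) are assumptions chosen for readability; the patent does not define a concrete layout.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SimpleTensor:
    """Hypothetical unified data structure with the four fields named in the text:
    a storage address of the data, an effective number of data elements,
    the number of data dimensions, and the size of each dimension."""
    data: bytes                                     # raw storage (stands in for a storage address)
    valid_count: int                                # effective number of data elements
    ndim: int                                       # number of data dimensions
    dims: List[int] = field(default_factory=list)   # size of each data dimension

    def __post_init__(self) -> None:
        # Basic consistency check between the dimension count and the dimension sizes.
        if len(self.dims) != self.ndim:
            raise ValueError("dims must contain exactly ndim entries")
```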
Further, the operation platform is used for sending data to and receiving data from the cloud server,
the transmitted data includes:
data information which cannot be calculated locally, data information fed back by the user, and data required for intelligent model training;
the received data includes:
the execution result of the intelligent model sent back by the cloud server.
By adopting the technical scheme, the invention can achieve the following beneficial effects:
the artificial intelligence operation platform can screen and process large-scale data, valuable information is only transmitted to the cloud server through a network or a result required by a user is directly returned, so that the concept of data processing nearby can realize the storage of network bandwidth and a data center, computing resources are greatly saved, and the operation efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of an artificial intelligence operation platform according to the present invention;
FIG. 2 is a schematic structural diagram of an artificial intelligence operation platform according to the present invention;
FIG. 3 is a diagram of the steps of a working method of an artificial intelligence operation platform according to the present invention;
FIG. 4 is a diagram illustrating steps of a method for operating an artificial intelligence operation platform according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
A specific artificial intelligence operation platform provided in the embodiment of the present application is described below with reference to the accompanying drawings.
As shown in fig. 1, the artificial intelligence operation platform provided in the embodiment of the present application includes: a hardware layer, a system layer, a software interface layer and an application layer;
the hardware layer adopts embedded hardware and comprises a memory and a CPU, the memory being connected with the CPU;
the system layer adopts a customized Linux system and is used for realizing the customization of the operation platform;
the software interface layer comprises a middleware, an ROS system, a data transmission interface and a general function module;
the middleware is used for converting the received different data into a unified data structure;
the ROS system is used for unifying the data acquisition and sending interfaces of various hardware interfaces;
the data transmission interface is used for realizing data transmission between the operation platform and the cloud server;
the general function module is used for providing specific implementation of general functions;
the application layer comprises an intelligent client used for realizing data interaction between the operation platform and external intelligent equipment.
The working principle of the artificial intelligence operation platform is as follows: the artificial intelligence operation platform is arranged between the intelligent equipment and the cloud server, where the intelligent equipment can be processing equipment, a robot, an AGV and the like. The artificial intelligence operation platform receives data from the intelligent equipment, performs the calculation, and returns the calculation result to the intelligent equipment so that the intelligent equipment executes the corresponding action. When the computing capacity of the artificial intelligence operation platform is too low to calculate the data, the platform sends the data it cannot calculate to the cloud server; the cloud server calculates the data and sends the calculation result back to the artificial intelligence operation platform, which forwards it to the intelligent equipment, and the intelligent equipment executes the corresponding action.
In the application, the artificial intelligence operation platform comprises a hardware layer, a system layer, a software interface layer and an application layer;
the hardware layer adopts embedded hardware supporting both the x86 and arm architectures; the embedded hardware comprises a memory and a CPU, and the CPU provides data acquisition and data transmission functions. The embedded hardware can store the customized Linux system and has an internal storage medium of at least 16GB, or can be externally connected with a storage medium of at least 8GB. The system layer in the application adopts a customized Linux system. The middleware converts received different data into a unified data structure; the ROS system unifies the data acquisition and sending interfaces of various hardware interfaces; the data transmission interface realizes data transmission between the operation platform and the cloud server; the general function module is responsible for providing the specific implementation of general functions; and the intelligent client realizes data interaction between the operation platform and external intelligent equipment. The operation platform adopts a layered design and is an intelligent operation platform structure suitable for the field of artificial intelligence.
In some embodiments, as shown in fig. 2, the embedded hardware further comprises:
GPU and storage hard disk.
The GPU is used for high-performance parallel computing;
the storage hard disk is used for storing various types of software and data.
Specifically, the GPU has a relatively high computational power and can implement relatively complex data processing.
In some embodiments, as shown in fig. 2, the application layer further includes:
the intelligent model is used for finishing the operation of the AI algorithm;
and the intelligent model is connected with external intelligent equipment through the intelligent client.
Specifically, the intelligent model is arranged in the application layer of the artificial intelligence operation platform, and an AI algorithm is preset in the intelligent model so that it can complete data calculation; when the artificial intelligence operation platform contains a GPU, i.e. hardware with strong computing capability, the intelligent model can complete computationally complex data processing. Specifically, the intelligent client sends the various data acquired from external intelligent equipment to the intelligent model on the intelligent operation platform to complete the calculation of the intelligent algorithm; it also receives the calculation result of the intelligent model and controls the equipment to execute the corresponding action.
In some embodiments, the cloud server comprises:
an intelligent model connected with the intelligent client.
Specifically, the intelligent model can also be arranged in the cloud server; in this case the artificial intelligence operation platform does not contain a GPU. The intelligent client sends the various data acquired from external intelligent equipment to the intelligent model to complete the calculation of the intelligent algorithm; it also receives the calculation result of the intelligent model and controls the equipment to execute the corresponding action.
In the application, the intelligent client and the intelligent model are independent from each other, so that data interaction and an intelligent algorithm are independent from each other.
The middleware adopts a deep neural network middleware, i.e. a DNN middleware. The DNN middleware supports multiple deep learning platforms, including tensorflow, caffe2 and mxnet. Different platforms each define their own data structures and their own prediction and training interfaces; such differing interfaces are not conducive to data communication or later maintenance between the cloud server nodes and the artificial intelligence operation platform, so the data structures and interfaces of the different platforms need to be unified. The DNN middleware supports both CPU and GPU computing architectures and completes unified data preprocessing and forward prediction: the data preprocessing converts various kinds of data (such as images, audio and video) into the unified data structure, and the forward prediction unifies the forward prediction interfaces of the different platforms and the data structure of the output results. A unified data structure, simple_tensor, is thus realized. The DNN middleware interface comprises:
(1) The data preprocessing interface, called by an application program to complete the preprocessing of data.
Input: image, video, audio and other data received by the intelligent operation platform, together with data dimension information;
Output: data in the unified data structure simple_tensor.
(2) The initialization interface, which completes the loading of the neural network model and parameters.
Input: storage paths of the neural network model files and parameter files of the different AI platforms.
(3) The forward prediction interface, which completes forward prediction of the corresponding intelligent model.
Input: neural network input data in the unified data structure simple_tensor;
Output: neural network output data in the unified data structure simple_tensor.
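As a rough, non-authoritative illustration of the three DNN middleware calls listed above, the sketch below expresses them as an abstract Python class. The method names and signatures are assumptions that mirror the described inputs and outputs; they are not interfaces defined by the patent or by tensorflow, caffe2 or mxnet.

```python
from abc import ABC, abstractmethod
from typing import Any, Sequence

class DNNMiddleware(ABC):
    """Hypothetical interface mirroring the three DNN middleware calls described above."""

    @abstractmethod
    def preprocess(self, raw_data: Any, dims: Sequence[int]) -> "SimpleTensor":
        """(1) Data preprocessing: convert image/video/audio data plus dimension
        information into the unified data structure simple_tensor."""

    @abstractmethod
    def initialize(self, model_path: str, param_path: str) -> None:
        """(2) Initialization: load the neural network model and parameter files
        of the underlying AI platform (e.g. tensorflow, caffe2 or mxnet)."""

    @abstractmethod
    def forward(self, input_tensor: "SimpleTensor") -> "SimpleTensor":
        """(3) Forward prediction: run the loaded intelligent model and return
        output data in the unified simple_tensor structure."""
```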
In some embodiments, the cloud server comprises:
servers within a region (not shown), a cluster of core servers (not shown);
one end of the server in the region is respectively connected with the intelligent client and the intelligent model, and the other end of the server in the region is connected with the core server cluster.
Preferably, the customized Linux system adopts:
a read-only file system based on squashfs. This guarantees the security of the operating system and ensures that the system is not damaged when the operation platform is frequently powered on and off.
Specifically, the artificial intelligence operation platform provided by the application is connected with the cloud server through a network connection and with the intelligent equipment through various data interfaces. The operation platform has a certain computing power, provides a uniform interface for intelligent models trained on different platforms, and completes real-time execution of a specific model. For intelligent model calculation tasks that are too complex for the platform to complete independently, the specific task type and feature data are forwarded to the cloud server in real time and the result returned by the cloud server is received. In addition, the operation platform has the capability of transmitting intelligence-evolution data online: when the equipment is idle, the acquired data is sent to the cloud server to support training of the intelligent model, and when a clear calculation deviation of the intelligent model is detected, the corresponding data is sent to the cloud server in time.
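The paragraph above describes two upload paths for intelligence-evolution data: buffered data sent while the equipment is idle, and data sent immediately when a clear calculation deviation is detected. A minimal sketch of that behaviour, under assumed names and an assumed deviation threshold, might look as follows.

```python
import queue
from typing import Any, Callable

class EvolutionDataUploader:
    """Hypothetical buffer for intelligence-evolution data destined for the cloud server."""

    def __init__(self, send_to_cloud: Callable[[Any], None], deviation_threshold: float = 0.5):
        self._send = send_to_cloud             # callable that transmits one record to the cloud
        self._pending = queue.Queue()          # data collected to support intelligent model training
        self._threshold = deviation_threshold  # assumed criterion for a "clear" calculation deviation

    def collect(self, record: Any, deviation: float = 0.0) -> None:
        if deviation >= self._threshold:
            # Clear calculation deviation: send the data to the cloud server in time.
            self._send(record)
        else:
            self._pending.put(record)

    def flush_when_idle(self) -> None:
        # When the equipment is idle, send the buffered data to support model training.
        while not self._pending.empty():
            self._send(self._pending.get())
```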
In some embodiments, the read-only file system includes a read-only partition, an encrypted partition, and a writable partition.
The read-only partition is used for storing the intelligent client and the intelligent model;
the encryption partition is used for storing key models and parameter files required by the operation of the intelligent model;
the writable partition is used for storing data generated by the intelligent client and the intelligent model.
Specifically, the operating system uses a Debian-based customized system designed as follows: a) the operating system is read-only and uses a squashfs file system, so that the system is not damaged when the operation platform is frequently powered on and off; b) the partition where the intelligent model and other application programs are located is a read-only partition, again so that frequent power cycling does not damage the system, and this partition can be remounted as a writable partition to complete an upgrade of the intelligent model; c) a writable partition exists in the system and stores the various data generated while the application programs run; d) when the system contains content that needs encryption, such as important model files and key parameter files, the corresponding partition can be mounted as an encrypted partition.
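Purely as an illustrative sketch of the partition design just described, the snippet below writes the layout down as a small mount table and shows the remount step used for intelligent model upgrades. The mount points, file system choices and the use of mount -o remount are assumptions, not details given in the patent.

```python
import subprocess

# Hypothetical partition layout for the squashfs-based customized system:
# read-only partitions for the OS and for the intelligent client/model,
# an encrypted partition for key models and parameters, and a writable data partition.
PARTITIONS = {
    "/":       {"fs": "squashfs", "mode": "ro"},                     # operating system
    "/opt/ai": {"fs": "squashfs", "mode": "ro"},                     # intelligent client and model
    "/secure": {"fs": "ext4",     "mode": "ro", "encrypted": True},  # key models and parameter files
    "/data":   {"fs": "ext4",     "mode": "rw"},                     # data generated at run time
}

def remount_for_model_upgrade(mount_point: str = "/opt/ai") -> None:
    """Temporarily remount the model partition writable to upgrade the intelligent model."""
    subprocess.run(["mount", "-o", "remount,rw", mount_point], check=True)

def remount_read_only(mount_point: str = "/opt/ai") -> None:
    """Restore the read-only mode after the upgrade is finished."""
    subprocess.run(["mount", "-o", "remount,ro", mount_point], check=True)
```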
As shown in fig. 3, the present application provides a working method of an artificial intelligence operation platform, including:
S1, acquiring different data of the intelligent equipment to form a data structure;
S2, converting the received different data into a unified data structure;
S3, performing calculation on the unified data structure, or transmitting it to a cloud server, and obtaining the calculation result after the calculation is completed;
S4, transmitting the calculated result to the intelligent equipment.
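A minimal sketch of steps S1 to S4 is given below. The helper objects (device, middleware, cloud, can_compute_locally) are assumptions introduced only to show the control flow; the patent does not define these interfaces.

```python
def run_once(device, middleware, cloud, can_compute_locally) -> None:
    # S1: acquire different data from the intelligent equipment.
    raw_data, dims = device.read()

    # S2: convert the received data into the unified data structure.
    unified = middleware.preprocess(raw_data, dims)

    # S3: compute locally, or transmit to the cloud server and obtain the result.
    if can_compute_locally(unified):
        result = middleware.forward(unified)
    else:
        result = cloud.compute(unified)

    # S4: transmit the calculated result back to the intelligent equipment.
    device.execute(result)
```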
Preferably, as shown in fig. 4, the converting the received different data into a unified data structure includes:
S21, converting the received different data into a unified data structure by the middleware;
S22, converting the data structure into the input data structures of different platforms;
S23, calling the forward prediction interfaces of the different platforms;
S24, obtaining the forward prediction results of the different platforms;
S25, converting the prediction results into the unified data structure.
Specifically, the unified data structure used throughout is simple_tensor.
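Steps S21 to S25 can be read as a thin adapter around each deep learning platform. The sketch below shows that flow with an assumed adapter object; to_platform_input, forward and from_platform_output stand in for whatever tensorflow, caffe2 or mxnet actually require and are not names taken from the patent.

```python
def middleware_predict(adapter, unified_input):
    """unified_input is the result of step S21 (raw data already converted to simple_tensor)."""
    # S22: convert the unified data structure into the input structure of the selected platform.
    platform_input = adapter.to_platform_input(unified_input)

    # S23/S24: call the platform's forward prediction interface and obtain its result.
    platform_output = adapter.forward(platform_input)

    # S25: convert the prediction result back into the unified data structure.
    return adapter.from_platform_output(platform_output)
```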
Preferably, the data structure includes a storage address of the data, an effective number of the data, a data dimension, and a size of the data dimension.
Preferably, the operation platform is used for sending data to and receiving data from the cloud server,
the transmitted data includes:
data information which cannot be calculated locally, data information fed back by the user, and data required for intelligent model training;
the received data includes:
the execution result of the intelligent model sent back by the cloud server.
The format of the information sent by the artificial intelligence operation platform to the cloud server is shown in Table 1.
Table 1 (reproduced in the original publication as an image; contents not available in this text)
Preferably, the different data includes:
the intelligent model input layer data, image data, video data, audio data, binary data, error execution information data and user feedback information data.
Specifically, the intelligent model input layer data can be unified into one data structure, so the work to be completed includes unifying the input layer data of different deep learning platforms; the supported deep learning platforms include tensorflow, caffe2 and mxnet.
The intelligent model input layer data is shown in Table 2.
Table 2 (reproduced in the original publication as an image; contents not available in this text)
The format of the data blocks in other formats is shown in Table 3.
Table 3 (reproduced in the original publication as an image; contents not available in this text)
The only information sent by the cloud server to the intelligent operation platform is the intelligent model execution result. Its data format is shown in Table 4.
Table 4 (reproduced in the original publication as an image; contents not available in this text)
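Because Tables 1 to 4 are only available as images in the original publication, the exact field layouts cannot be reproduced here. Purely as an illustration of the kinds of messages described in the text, a hedged sketch of such an envelope might look as follows; every field and type name is an assumption.

```python
from dataclasses import dataclass
from enum import Enum, auto

class PayloadType(Enum):
    # Data types the text says the platform may send to the cloud server.
    MODEL_INPUT_LAYER = auto()
    IMAGE = auto()
    VIDEO = auto()
    AUDIO = auto()
    BINARY = auto()
    ERROR_EXECUTION_INFO = auto()
    USER_FEEDBACK = auto()

@dataclass
class CloudMessage:
    """Hypothetical platform-to-cloud envelope; the actual formats are defined in Tables 1-3."""
    payload_type: PayloadType
    payload: bytes
    device_id: str = ""

@dataclass
class ModelExecutionResult:
    """The only message the cloud server sends back: the intelligent model execution result (Table 4)."""
    payload: bytes
    success: bool = True
```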
The beneficial effects of this application technical scheme include:
(1) the application designs a complete, layered intelligent operation platform architecture suitable for the field of artificial intelligence;
(2) the application uses the read-only squashfs file system, so that the safety of the operating system is ensured;
(3) the intelligent client and the intelligent model are independent from each other, so that data interaction and an intelligent algorithm are independent from each other;
(4) the application designs an independent DNN middleware interface, so that the intelligent operation platform is more universal and easy to use;
(5) the application is provided with a unified communication interface with the cloud server, guaranteeing information interaction between the operation platform and the cloud server.
To sum up, the application provides an artificial intelligence operation platform comprising a hardware layer, a system layer, a software interface layer and an application layer. By arranging the artificial intelligence operation platform between the intelligent equipment and the cloud server, large-scale data generated by the intelligent equipment can be screened and processed on the artificial intelligence operation platform, and only valuable information is transmitted to the cloud server through the network or the result required by the user is returned directly. By processing data near its source, network bandwidth and data-center storage are saved, computing resources are greatly reduced, and operation efficiency is improved.
It can be understood that the embodiment of the method provided above corresponds to the embodiment of the artificial intelligence operation platform, and corresponding specific contents may be referred to each other, which is not described herein again.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a CPU of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the CPU of the computer or other programmable data processing apparatus, create an artificial intelligence operating platform for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction-based artificial intelligence execution platform which implements the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (9)

1. An artificial intelligence operation platform, comprising: a hardware layer, a system layer, a software interface layer and an application layer;
the hardware layer adopts embedded hardware and comprises a memory and a CPU, the memory being connected with the CPU;
the system layer adopts a customized Linux system and is used for realizing the customization of the operation platform;
the software interface layer comprises a middleware, an ROS system, a data transmission interface and a general function module;
the middleware is used for converting the received different data into a unified data structure;
the converting the received different data into a unified data structure includes:
the middleware converts the received different data into a unified data structure;
converting the data structure into an input data structure of different platforms;
calling forward prediction interfaces of different platforms;
obtaining forward prediction results of different platforms;
converting the prediction result into a uniform data structure;
the ROS system is used for unifying the data acquisition and sending interfaces of various hardware interfaces;
the data transmission interface is used for realizing data transmission between the operation platform and the cloud server;
the general function module is used for providing specific implementation of general functions;
the application layer comprises an intelligent client used for realizing data interaction between the operation platform and external intelligent equipment.
2. The artificial intelligence operation platform of claim 1, wherein the embedded hardware further comprises: a GPU and a storage hard disk;
the GPU is used for high-performance parallel computing;
the storage hard disk is used for storing various types of software and data.
3. The artificial intelligence operation platform of claim 2, wherein the application layer further comprises:
the intelligent model is used for finishing the operation of the AI algorithm;
and the intelligent model is connected with external intelligent equipment through the intelligent client.
4. The artificial intelligence operation platform of claim 1, wherein the middleware employs:
deep neural network middleware.
5. The artificial intelligence operation platform of claim 3, wherein the customized Linux system employs:
a read-only file system based on squashfs.
6. The artificial intelligence operation platform of claim 5,
the read-only file system comprises a read-only partition, an encryption partition and a writable partition;
the read-only partition is used for storing the intelligent client and the intelligent model;
the encryption partition is used for storing key models and parameter files required by the operation of the intelligent model;
the writable partition is used for storing data generated by the intelligent client and the intelligent model.
7. A working method of the artificial intelligence operation platform according to any one of claims 1 to 6, characterized by comprising the following steps:
acquiring different data of the intelligent equipment;
converting the received different data into a unified data structure;
calculating the unified data structure or transmitting the unified data structure to a cloud server and acquiring a calculation result;
transmitting the calculated result to the intelligent equipment;
the converting the received different data into a unified data structure includes:
the middleware converts the received different data into a unified data structure;
converting the data structure into an input data structure of different platforms;
calling forward prediction interfaces of different platforms;
obtaining forward prediction results of different platforms;
and converting the prediction result into a uniform data structure.
8. The working method of claim 7, wherein:
the data structure comprises a data storage address, an effective number of data, a data dimension and a size of the data dimension.
9. The working method of claim 7, wherein the operation platform is configured to send data to and receive data from a cloud server,
the transmitted data includes:
data information which cannot be calculated locally, data information fed back by the user, and data required for intelligent model training;
the received data includes:
and the execution result of the intelligent model is sent back by the cloud server.
CN201811316780.5A 2018-11-06 2018-11-06 Artificial intelligence operation platform Active CN109408500B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811316780.5A CN109408500B (en) 2018-11-06 2018-11-06 Artificial intelligence operation platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811316780.5A CN109408500B (en) 2018-11-06 2018-11-06 Artificial intelligence operation platform

Publications (2)

Publication Number Publication Date
CN109408500A CN109408500A (en) 2019-03-01
CN109408500B (en) 2020-11-17

Family

ID=65472054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811316780.5A Active CN109408500B (en) 2018-11-06 2018-11-06 Artificial intelligence operation platform

Country Status (1)

Country Link
CN (1) CN109408500B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832738B (en) 2019-04-18 2024-01-09 中科寒武纪科技股份有限公司 Data processing method and related product
US11934940B2 (en) 2019-04-18 2024-03-19 Cambricon Technologies Corporation Limited AI processor simulation
CN110119002A (en) * 2019-04-28 2019-08-13 武汉企鹅能源数据有限公司 Meteorological AI platform based on big data
CN110502213A (en) * 2019-05-24 2019-11-26 网思科技股份有限公司 A kind of artificial intelligence capability development platform
CN111104459A (en) * 2019-08-22 2020-05-05 华为技术有限公司 Storage device, distributed storage system, and data processing method
CN114902250A (en) * 2019-12-31 2022-08-12 亚信科技(中国)有限公司 AI intelligence injection based on signaling interaction
CN111427687A (en) * 2020-03-23 2020-07-17 深圳市中盛瑞达科技有限公司 Artificial intelligence cloud platform
CN111464422B (en) * 2020-03-27 2022-01-07 京东科技信息技术有限公司 Interaction method, interaction device, electronic equipment and storage medium
CN111694781A (en) * 2020-04-21 2020-09-22 恒信大友(北京)科技有限公司 ARM main control board based on data acquisition system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704252A (en) * 2017-10-20 2018-02-16 北京百悟科技有限公司 A kind of method and system for providing a user artificial intelligence platform
CN108353090A (en) * 2015-08-27 2018-07-31 雾角***公司 Edge intelligence platform and internet of things sensors streaming system
CN108427992A (en) * 2018-03-16 2018-08-21 济南飞象信息科技有限公司 A kind of machine learning training system and method based on edge cloud computing
CN108667850A (en) * 2018-05-21 2018-10-16 济南浪潮高新科技投资发展有限公司 A kind of artificial intelligence service system and its method for realizing artificial intelligence service

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10311376B2 (en) * 2015-06-20 2019-06-04 Quantiply Corporation System and method for creating biologically based enterprise data genome to predict and recommend enterprise performance
US10049108B2 (en) * 2016-12-09 2018-08-14 International Business Machines Corporation Identification and translation of idioms
CN108733793B (en) * 2018-05-14 2019-12-10 北京大学 Ontology model construction method and system for relational database


Also Published As

Publication number Publication date
CN109408500A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN109408500B (en) Artificial intelligence operation platform
CN110780914B (en) Service publishing method and device
US11501160B2 (en) Cloud computing data compression for allreduce in deep learning
Hu et al. Cloudroid: A cloud framework for transparent and QoS-aware robotic computation outsourcing
CN111429142B (en) Data processing method and device and computer readable storage medium
CN114327399A (en) Distributed training method, apparatus, computer device, storage medium and product
Gand et al. Serverless container cluster management for lightweight edge clouds
CN114995994A (en) Task processing method and system
EP4222598A1 (en) Optimizing job runtimes via prediction-based token allocation
Trunov et al. Container cluster model development for legacy applications integration in scientific software system
Campeanu et al. Component allocation optimization for heterogeneous CPU-GPU embedded systems
CN109491956B (en) Heterogeneous collaborative computing system
Chao et al. Ecosystem of things: Hardware, software, and architecture
CN112235419B (en) Robot cloud platform execution engine and execution method based on behavior tree
CN110232338A (en) Lightweight Web AR recognition methods and system based on binary neural network
Yun et al. Towards a cloud robotics platform for distributed visual slam
CN116909748A (en) Computing power resource allocation method and device, electronic equipment and storage medium
CN112565404A (en) Data processing method, edge server, center server and medium
US11467835B1 (en) Framework integration for instance-attachable accelerator
Rosendo et al. ProvLight: Efficient workflow provenance capture on the edge-to-cloud continuum
CN113954679B (en) Edge control equipment applied to ordered charging control of electric automobile
US20210357250A1 (en) Processing files via edge computing device
Westerlund et al. A generalized scalable software architecture for analyzing temporally structured big data in the cloud
CN112817581A (en) Lightweight intelligent service construction and operation support method
CN111427687A (en) Artificial intelligence cloud platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant