CN110769037A - Resource allocation method for embedded edge computing platform - Google Patents

Resource allocation method for embedded edge computing platform

Info

Publication number
CN110769037A
CN110769037A CN201910929156.0A
Authority
CN
China
Prior art keywords
data
letter
graph
calculation
computing platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910929156.0A
Other languages
Chinese (zh)
Other versions
CN110769037B (en)
Inventor
林勤
潘灵
钟瑜
贾明权
刘红伟
张昊
吴明钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 10 Research Institute
Southwest Electronic Technology Institute No 10 Institute of Cetc
Original Assignee
Southwest Electronic Technology Institute No 10 Institute of Cetc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Electronic Technology Institute No 10 Institute of Cetc filed Critical Southwest Electronic Technology Institute No 10 Institute of Cetc
Priority to CN201910929156.0A priority Critical patent/CN110769037B/en
Publication of CN110769037A publication Critical patent/CN110769037A/en
Application granted granted Critical
Publication of CN110769037B publication Critical patent/CN110769037B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a resource allocation method for an embedded edge computing platform, aiming to provide a simple, reliable, and rapidly deployable distributed computing resource allocation method for such platforms. The invention is realized by the following technical scheme: a Json file is used as the carrier to record the data propagation paths and node computation operations of a computation graph, and any computation operation programs that need to be supplemented are packed together with it into a compressed file; the computing task (i.e., the Json file package) is then injected into the embedded edge computing platform, which automatically parses the Json file, restores the computation graph, and decomposes the computing and communication resource requirements according to the computation graph to form a mapping graph; finally, the mapping graph is used to deploy the computing requirements onto hardware resources. If the mapping fails, the decomposition of the computation graph can be adjusted according to the failure feedback, and a new mapping graph is formed and mapped again.

Description

Resource allocation method for embedded edge computing platform
Technical Field
The invention relates to a distributed computing resource configuration method for an embedded edge computing platform.
Background
In recent years, with the increasing capabilities of sensors and end devices, data volumes and the diversity of data-processing demands have grown explosively. In particular, as new applications such as the Internet of Things, autonomous driving, and virtual reality continually emerge, traditional cloud-based big-data processing and artificial-intelligence computation can no longer solve the resulting problems well, so a new computing model called edge computing has arisen. Edge computing is performed at the edge of the network, i.e., near the source of data generation, using network edge nodes to process and analyze data and provide near-end computing services.
Edge computing is becoming increasingly important as part of the industrial Internet of Things. Because Internet-of-Things terminals are resource-constrained, the traditional mode relies on remote cloud computing resources to serve users; if all terminal data were transmitted to the cloud center for uniform processing and then returned to the terminals, enormous pressure would be placed on network links and the data center, the cloud center could easily become overloaded and deny service, and the end-user experience would suffer. Therefore, following the concept and practice of hierarchical, layered computing, the edge computing mode was gradually proposed: computing services are provided in areas adjacent to users, reducing the resource pressure on the network and the cloud computing center. Edge computing does not replace cloud computing but extends it, providing a better computing platform for the Internet of Things. The architecture of edge computing is a three-layer model of end device, edge, and cloud, and all three layers can provide resources and services to applications. The proximity between the edge layer and the device layer has two aspects. First, logical proximity refers to the number of routing hops between the edge-layer infrastructure and the end device; a larger hop count means a greater chance of congestion along the route and a greater likelihood of increased delay. Second, physical proximity depends on the physical distance between the terminal device and the edge layer and on the capabilities of the edge computing device.
In edge computing, end devices generate large amounts of data, and a large share of data-processing services must be provided by edge servers (also referred to as edge computing platforms). How to dynamically schedule this data and processing to suitable computing service providers, i.e., computing nodes, according to the performance of the edge computing platform and the network conditions is therefore one of the core problems in edge computing. Meanwhile, computing devices and computing service requests in the edge environment are highly dynamic: devices are dynamically registered and revoked as users switch, computing service requests may be initiated or terminated at any time, and the computing services of the edge platform generally need to be migrated, started, or stopped. Realizing rapid configuration of edge-platform computing services is therefore also a core problem in edge computing.
An embedded system is a special-purpose computer system, built on computer technology with tailorable software and hardware, designed for application systems with strict requirements on functionality, reliability, cost, volume, and power consumption. Mainstream embedded microprocessors currently include the Advanced RISC Machine (ARM), the Digital Signal Processor (DSP), the Field Programmable Gate Array (FPGA), and so on. Owing to characteristics such as small volume, low power consumption, easily customized form factor, and portability, embedded systems are well suited to serve as edge computing platforms providing edge computing services. However, no implementation standard has yet been established for the core problems above. Compared with a cloud computing center, an edge computing platform has limited computing, communication, storage, and power resources and is not suitable for heavyweight system services, so a lightweight resource configuration method suitable for edge computing platforms is urgently needed.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a resource allocation method for an embedded edge computing platform that is simple, reliable, low in complexity, and rapidly deployable, to meet the resource allocation requirements of embedded edge computing platforms.
The above object of the present invention is achieved by the following means. A resource allocation method for an embedded edge computing platform comprises the following steps: first, a scheduler uses a Json file as the carrier to lay out a computation graph of a map-reduce (MapReduce) computation model that counts letter occurrence frequency in a distributed manner; the Json file records the data propagation paths and node computation operations of the MapReduce computation graph, and any computation operation programs that need to be supplemented are packed into a compressed file. Then, the computing task is injected into the embedded edge computing platform, which automatically parses the Json file, restores the computation graph, and decomposes the computing and communication resource requirements according to the computation graph to form a mapping graph. During decomposition of the computation graph, several chained single-input, single-output operations are placed in the same operation group, ensuring that the computations of one operation group are assigned to the same computing node during resource mapping, which reduces communication between computing nodes; different operation groups are connected in sequence according to the principle of staying close to the data source, shortening the propagation path of data among the distributed computing resources. Finally, the mapping graph is used to deploy the computing requirements onto hardware resources; if the mapping fails, the decomposition of the computation graph is adjusted according to the failure feedback, a new mapping graph is formed, and the mapping is attempted again.
Compared with the prior art, the method has the beneficial effects that:
Simple, reliable, and lightweight. The method uses a Json file as the carrier to lay out the MapReduce computation graph; the agreed layout format describes the data propagation paths and node operations of the MapReduce computation graph simply and clearly, and the platform restores the computing task by parsing the Json file. Json is a flexible, lightweight data-exchange format that stores and represents data in a text form completely independent of any programming language; it is based on a subset of the ECMAScript (JavaScript) specification of the European Computer Manufacturers Association, and its syntax rules resemble the conventions of the C-language family. Json's simple, clear hierarchical structure is easy for humans to read and write and easy for machines to parse and generate. The platform computing task is therefore injected in a language-independent text format, without requiring the support of a heavyweight language environment such as Python.
Low complexity and rapid deployment. The invention takes into account both the delay and bandwidth requirements of edge computing applications and the characteristics of an embedded edge computing platform: 1) during decomposition of the computation graph, several chained single-input, single-output operations are placed in the same operation group, ensuring that the computations of one operation group are assigned to the same computing node during resource mapping and reducing communication between computing nodes; and 2) different operation groups are connected in sequence according to the principle of staying close to the data source, shortening the propagation path of data among the distributed computing resources. Both measures are simple, effective, and low in complexity: they avoid the complex computation of resource-optimization planning at deployment time while still respecting the delay requirements of the tasks, enabling rapid deployment of tasks on the edge computing platform.
The method is suitable for rapid deployment of distributed resources on embedded edge computing platforms; it is simple and reliable, has low environment and resource requirements, and has great engineering application value.
Drawings
The invention is further explained below with reference to the drawings and embodiments.
FIG. 1 is a flow chart of resource allocation for an embedded edge computing platform according to the present invention.
FIG. 2 is the MapReduce computation graph corresponding to the agreed Json file layout format.
FIG. 3 is the mapping graph decomposed from the MapReduce computation graph of FIG. 2.
FIG. 4 is a diagram of an embodiment of the mapping from the mapping graph to embedded hardware resources.
Detailed Description
The method is further described with reference to the figures and the detailed description.
See FIG. 1. In the method, an application scheduler first uses a Json file as the carrier to lay out a computation graph of a map-reduce (MapReduce) computation model that counts letter occurrence frequency in a distributed manner; the Json file records the data propagation paths and node computation operations of the MapReduce computation graph, and any computation operation programs that need to be supplemented are packed into a compressed file. Then, the computing task is injected into the embedded edge computing platform, which automatically parses the Json file, restores the computation graph, and decomposes the computing and communication resource requirements according to the computation graph to form a mapping graph. During decomposition of the computation graph, several chained single-input, single-output operations are placed in the same operation group, ensuring that the computations of one operation group are assigned to the same computing node during resource mapping, which reduces communication between computing nodes; different operation groups are connected in sequence according to the principle of staying close to the data source, shortening the propagation path of data among the distributed computing resources. Finally, the mapping graph is used to deploy the computing requirements onto hardware resources; if the mapping fails, the decomposition of the computation graph is adjusted according to the failure feedback, a new mapping graph is formed, and the mapping is attempted again.
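The overall flow above can be sketched as a parse-decompose-map loop with retry on failure feedback. The sketch below is illustrative only: `deploy_task`, `decompose`, and `map_to_hardware` are hypothetical names standing in for the platform services the patent describes, not APIs named in the patent.

```python
import json

def deploy_task(task_json, decompose, map_to_hardware, max_retries=3):
    """Parse the injected Json task, restore the computation graph,
    decompose it into a mapping graph, and try to map it onto hardware,
    re-decomposing with the failure feedback when an attempt fails.
    `decompose` and `map_to_hardware` stand in for platform services."""
    graph = json.loads(task_json)          # restore the computation graph
    feedback = None                        # failure feedback from the last attempt
    for _ in range(max_retries):
        mapping_graph = decompose(graph, feedback)
        ok, feedback = map_to_hardware(mapping_graph)
        if ok:
            return mapping_graph           # deployment succeeded
    raise RuntimeError("mapping failed after retries: %s" % feedback)
```

The retry loop captures the patent's last step: an unsuccessful mapping feeds its failure reason back into the next decomposition of the computation graph.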
The packaging in the first step means packing the Json file together with executable files for computation operations that are not supported by the computing platform's built-in functions and therefore need to be supplemented, forming one compressed file. The "executables" item in the Json file describes the supplementary computation operations; the "ops" (operations) item describes the invocation of operations.
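A minimal sketch of this packaging step, assuming a zip container and illustrative file names ("task.json", "bin/..."); neither the container format nor the names are fixed by the patent, only the idea of bundling the Json description with supplementary executables:

```python
import io
import json
import zipfile

def pack_task(graph, extra_executables):
    """Pack the Json description of the computation graph together with
    any supplementary computation-operation executables into one
    compressed file. File names and layout here are assumptions."""
    graph = dict(graph)
    graph["executables"] = sorted(extra_executables)   # ops the platform must load
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("task.json", json.dumps(graph, indent=2))
        for name, binary in extra_executables.items():
            zf.writestr("bin/" + name, binary)         # supplementary executable
    return buf.getvalue()
```

On the platform side, unpacking the archive and reading "task.json" would recover both the computation graph and the list of operations that must be loaded from the bundled binaries.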
In the embodiment described below, the map-reduce (MapReduce) computation model is a computation graph that counts the occurrences of letters (letter count, LC) in a distributed manner. The computation graph comprises: a data source (LetterSource), a data distribution operation (round robin, RoundRobin, RR), letter splitting operations (LetterSplit, LS) and per-letter aggregation operations (LetterPartition, LP) on 3 data processing channels, and a reduce operation (LetterReduce, LR) with an output (LetterOutput, LO). The data source randomly generates a large number of letters or reads them in from an external file; the data distribution operation sends the data stream from the source to the 3 data processing channels in round-robin fashion; the 3 channels perform the distributed Map operations (LetterSplit and LetterPartition) on the received data; finally the results converge at LetterReduce and are emitted by LetterOutput.
Application developers can extend the computation graph by analogy, following the layout format of the Json file. Taking letter count as the example, the agreed format of the Json file layout is as follows:
[The agreed Json layout for the letter-count computation graph appears only as an image (Figure BDA0002219814180000051) in the original patent document.]
The Json file describes the input, output, data propagation paths, the computation operation function of each node on the paths, and the relevant parameters of the LC embodiment's computation graph.
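Since the agreed layout is shown only as an image in the patent, the exact field names are not recoverable; the fragment below is a hypothetical reconstruction of such a layout for the LC graph. Only the "executables" and "ops" items are named in the description; every other key and node id spelling here is an illustrative assumption.

```json
{
  "name": "letter-count",
  "executables": [],
  "ops": [
    {"id": "LetterSource", "op": "source",      "out": ["RR"], "mode": "random"},
    {"id": "RR",  "op": "RoundRobin",      "out": ["LS0", "LS1", "LS2"]},
    {"id": "LS0", "op": "LetterSplit",     "out": ["LP0"]},
    {"id": "LP0", "op": "LetterPartition", "out": ["LR"]},
    {"id": "LS1", "op": "LetterSplit",     "out": ["LP1"]},
    {"id": "LP1", "op": "LetterPartition", "out": ["LR"]},
    {"id": "LS2", "op": "LetterSplit",     "out": ["LP2"]},
    {"id": "LP2", "op": "LetterPartition", "out": ["LR"]},
    {"id": "LR",  "op": "LetterReduce",    "out": ["LO"]},
    {"id": "LO",  "op": "LetterOutput",    "out": []}
  ]
}
```

The "out" lists encode the data propagation paths (the arrows of FIG. 2), so the platform can rebuild the directed acyclic graph from the file alone.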
See FIG. 2. The LC embodiment adopts a directed acyclic computation graph constructed from the map-reduce (MapReduce) computation model; arrows in the graph represent data propagation paths, and the nodes (blocks) represent computation operations on the data. MapReduce is a distributed computing model whose core idea is to cut the large data set to be processed into several data fragments, process each fragment in parallel on different computing nodes of a distributed cluster (Map), and merge the fragments after processing to output the result (Reduce). MapReduce provides a programming model and method that can run parallel computation on tens, hundreds, or even thousands of distributed computing nodes, greatly lowering the parallel-computing background knowledge required of application developers and speeding up application development.
In the MapReduce computation graph of the LC embodiment, LetterSource is the data source, randomly generating a large number of letters or reading them from an external file; the data distribution operation RoundRobin divides the letters input from LetterSource into data fragments by time period and sends them to the 3 data processing channels of the subsequent Map stage in round-robin fashion; the 3 channels of the Map stage perform the Map operations on their received data fragments in parallel, namely the letter splitting operation (LetterSplit) and the per-letter aggregation operation (LetterPartition); after the Map stage completes, the results of the 3 channels converge into LetterReduce, which executes the Reduce operation and passes the result to the output (LetterOutput).
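The letter-count pipeline just described can be sketched as plain functions, one per node of the graph. The function names mirror the patent's operation names; treating the per-channel aggregation as a `Counter` is an implementation assumption.

```python
from collections import Counter

def round_robin(letters, channels=3):
    """RoundRobin: deal the input stream across the data processing channels."""
    shards = [[] for _ in range(channels)]
    for i, ch in enumerate(letters):
        shards[i % channels].append(ch)
    return shards

def letter_split(shard):
    """LetterSplit (Map): emit one (letter, 1) pair per letter."""
    return [(ch, 1) for ch in shard]

def letter_partition(pairs):
    """LetterPartition: aggregate counts per letter within one channel."""
    counts = Counter()
    for ch, n in pairs:
        counts[ch] += n
    return counts

def letter_reduce(partials):
    """LetterReduce: merge the per-channel partial counts."""
    total = Counter()
    for p in partials:
        total.update(p)
    return dict(total)

def letter_count(letters):
    """End-to-end LC graph: source stream -> RR -> Map channels -> Reduce."""
    shards = round_robin(letters)
    partials = [letter_partition(letter_split(s)) for s in shards]
    return letter_reduce(partials)
```

In the patent's deployment each of the three `letter_split`/`letter_partition` pairs runs on its own channel in parallel; the sequential list comprehension here only models the data flow.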
See FIG. 3. After the embedded edge computing platform parses the Json file and restores the computation graph, the resource requirements of the task must be decomposed so that the computing resource requirements can be mapped onto hardware. This is an optimization of the resource configuration, in which basic computing capacity and inter-node communication are the two core concerns. Taking communication cost and the real-time requirement (i.e., minimum delay) as the optimization targets, the invention divides the computing resource requirements by group allocation: several chained single-input, single-output operations can be placed in the same operation group, such as LS0 and LP0 in the figure. During resource allocation, the invention assigns each whole operation group to the same computing node; when a chain of operations is too long for one computing node, the chain is cut into several operation groups.
In the figure, searching from left to right according to the single-input, single-output principle, LS0 and LP0, LS1 and LP1, and LS2 and LP2 are divided into 3 operation groups, guaranteeing that communication within an operation group is communication within a computing node, which improves communication efficiency and reduces delay.
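The grouping rule can be sketched as a scan over the graph's edges. This is an illustrative reading of the rule, not code from the patent; it assumes the edge list is given source-first so chain heads are visited before their tails, and note that under the same rule the source and distributor (LC, RR) also chain together, consistent with FIG. 4 placing them on one node.

```python
from collections import defaultdict

def group_chains(edges):
    """Divide chained single-input, single-output operations into the
    same operation group, scanning outward from the data source.
    `edges` lists (upstream, downstream) operation pairs."""
    out_deg = defaultdict(int)
    in_deg = defaultdict(int)
    succ = defaultdict(list)
    nodes = []                      # first-appearance (source-first) order
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
        succ[u].append(v)
        for n in (u, v):
            if n not in nodes:
                nodes.append(n)
    groups, seen = [], set()
    for n in nodes:
        if n in seen:
            continue
        group, cur = [n], n
        seen.add(n)
        # extend the group while the link is single-output -> single-input
        while out_deg[cur] == 1:
            nxt = succ[cur][0]
            if in_deg[nxt] == 1 and nxt not in seen:
                group.append(nxt)
                seen.add(nxt)
                cur = nxt
            else:
                break
        groups.append(group)
    return groups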
See FIG. 4. A Texas Instruments 66AK2H14 chip, which combines a multi-core DSP (Digital Signal Processor) with an ARM (Advanced RISC Machine) core, is used as a computing node; each 66AK2H14 chip here provides 1 ARM core and 8 DSP cores, and the computation operations are performed entirely by the DSP cores. The embodiment uses one DSP core as the minimum computation granularity, i.e., one DSP core performs one computation operation. The mapping graph of the embodiment is mapped onto the hardware computation entities, i.e., the DSP cores, in the manner shown.
From left to right, in computing node 1 the data source maps to DSP Core 0 and RR to DSP Core 1, while LS0 and LP0 occupy the consecutive DSP Cores 2 and 3 as one operation group, and so on. After all DSP cores of computing node 1 are occupied, the system finds the nearest computing node 2 and maps the remaining LR and LO in turn to DSP Core 0 and DSP Core 1 of computing node 2. The solid arrowed lines in the hardware resource domain correspond to the solid arrowed lines in the mapping-graph domain and represent the propagation path of the data; the dashed arrowed lines spanning the mapping-graph domain and the hardware resource domain represent the mapping of each computation operation to the hardware entity performing it. It can be seen that the computations of one operation group are assigned to the same computing node, i.e., one 66AK2H14 chip, and that different operation groups are connected in sequence according to the principle of staying close to the data source, shortening the propagation path of the data among the distributed computing resources.
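The FIG. 4 placement can be sketched as a greedy walk over the operation groups. The function name and data shapes are illustrative assumptions; the sketch also assumes groups have already been cut so that none exceeds one node's core count.

```python
def map_groups_to_cores(groups, cores_per_node=8):
    """Greedily place operation groups onto DSP cores, one operation per
    core (the minimum computation granularity), keeping every group
    whole on one computing node and spilling to the nearest next node
    when the current node runs out of cores."""
    placement = {}              # op -> (node index, DSP core index)
    node, next_core = 0, 0
    for group in groups:
        if next_core + len(group) > cores_per_node:   # a group must not straddle nodes
            node, next_core = node + 1, 0
        for op in group:
            placement[op] = (node, next_core)
            next_core += 1
    return placement
```

With the five groups of the LC embodiment and 8 DSP cores per 66AK2H14, this reproduces the FIG. 4 layout: node 1 fills cores 0 through 7, and the LR/LO group spills to cores 0 and 1 of node 2.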
The foregoing is directed to the preferred embodiment of the present invention and it is noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.

Claims (10)

1. A resource allocation method for an embedded edge computing platform, characterized by comprising the following steps: first, a scheduler uses a Json file as the carrier to lay out a computation graph of a map-reduce (MapReduce) computation model that counts letter occurrence frequency in a distributed manner; the Json file records the data propagation paths and node computation operations of the MapReduce computation graph, and any computation operation programs that need to be supplemented are packed into a compressed file; then, a computing task is injected into the embedded edge computing platform, which automatically parses the Json file to restore the computation graph and decomposes the computing and communication resource requirements according to the computation graph to form a mapping graph; during decomposition of the computation graph, several chained single-input, single-output operations are placed in the same operation group, ensuring that the computations of one operation group are assigned to the same computing node during resource mapping, reducing communication between computing nodes, and different operation groups are connected in sequence according to the principle of staying close to the data source, shortening the propagation path of data among the distributed computing resources; finally, the mapping graph is used to deploy the computing requirements onto hardware resources; and if the mapping fails, the decomposition of the computation graph is adjusted according to the failure feedback, a new mapping graph is formed, and the mapping is attempted again.
2. The embedded edge computing platform resource allocation method of claim 1, wherein the computation graph comprises: a data source (LetterSource), a data distribution operation (round robin, RoundRobin, RR), letter splitting operations (LetterSplit, LS) and per-letter aggregation operations (LetterPartition, LP) on 3 data processing channels, and a reduce operation (LetterReduce, LR) with an output (LetterOutput, LO).
3. The embedded edge computing platform resource allocation method of claim 2, wherein: a large number of letters are randomly generated by the data source or entered using an external file.
4. The embedded edge computing platform resource allocation method of claim 2, wherein: the data source randomly generates letters or reads in a large number of letters from an external file; the data distribution operation RoundRobin divides the letters input from the data source into data fragments by time period and sends the fragments to the 3 data processing channels of the subsequent Map stage in round-robin fashion.
5. The embedded edge computing platform resource allocation method of claim 4, wherein: the data distribution operation sends the data stream input by the data source to the 3 data processing channels in round-robin fashion; the 3 channels perform the distributed Map operations (LetterSplit and LetterPartition) on the received data; finally the results converge at LetterReduce and are emitted by LetterOutput.
6. The embedded edge computing platform resource allocation method of claim 1, wherein: the map-reduce computation model (MapReduce) cuts large data into several data fragments, processes each fragment in parallel on different computing nodes of a distributed cluster (Map), merges the fragments after processing is complete, and outputs the processing result (Reduce).
7. The embedded edge computing platform resource allocation method of claim 6, wherein: the 3 data processing channels of the Map stage perform the Map operations on their received data fragments in parallel, namely the letter splitting operation (LetterSplit) and the per-letter aggregation operation (LetterPartition); after the Map stage completes, the results of the 3 channels converge into LetterReduce, which executes the Reduce operation and passes the result to the output (LetterOutput).
8. The embedded edge computing platform resource allocation method of claim 1, wherein: after the embedded edge computing platform parses the Json file and restores the computation graph, the resource requirements of the task are decomposed to complete the mapping from computing resource requirements to hardware.
9. The embedded edge computing platform resource allocation method of claim 1, wherein: the embedded edge computing platform divides the computing resource requirements by group allocation, i.e., several chained single-input, single-output operations are placed in the same operation group.
10. The embedded edge computing platform resource allocation method of claim 9, wherein: during resource allocation, the embedded edge computing platform assigns each whole operation group to the same computing node; when a chain of operations is too long for one computing node, the chain is cut into several operation groups.
CN201910929156.0A 2019-09-28 2019-09-28 Resource allocation method for embedded edge computing platform Active CN110769037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910929156.0A CN110769037B (en) 2019-09-28 2019-09-28 Resource allocation method for embedded edge computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910929156.0A CN110769037B (en) 2019-09-28 2019-09-28 Resource allocation method for embedded edge computing platform

Publications (2)

Publication Number Publication Date
CN110769037A true CN110769037A (en) 2020-02-07
CN110769037B CN110769037B (en) 2021-12-07

Family

ID=69330850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910929156.0A Active CN110769037B (en) 2019-09-28 2019-09-28 Resource allocation method for embedded edge computing platform

Country Status (1)

Country Link
CN (1) CN110769037B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112631986A (en) * 2020-12-28 2021-04-09 西南电子技术研究所(中国电子科技集团公司第十研究所) Large-scale DSP parallel computing device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819616A (en) * 2011-12-28 2012-12-12 中华电信股份有限公司 Cloud online real-time multi-dimensional analysis system and method
CN103473121A (en) * 2013-08-20 2013-12-25 西安电子科技大学 Mass image parallel processing method based on cloud computing platform
US20150200867A1 (en) * 2014-01-15 2015-07-16 Cisco Technology, Inc. Task scheduling using virtual clusters
CN109151824A (en) * 2018-10-12 2019-01-04 大唐高鸿信息通信研究院(义乌)有限公司 A kind of library data service extension system and method based on 5G framework
CN109344223A (en) * 2018-09-18 2019-02-15 青岛理工大学 Building information model management system and method based on cloud computing technology

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102819616A (en) * 2011-12-28 2012-12-12 中华电信股份有限公司 Cloud online real-time multi-dimensional analysis system and method
CN103473121A (en) * 2013-08-20 2013-12-25 西安电子科技大学 Mass image parallel processing method based on cloud computing platform
US20150200867A1 (en) * 2014-01-15 2015-07-16 Cisco Technology, Inc. Task scheduling using virtual clusters
CN109344223A (en) * 2018-09-18 2019-02-15 青岛理工大学 Building information model management system and method based on cloud computing technology
CN109151824A (en) * 2018-10-12 2019-01-04 大唐高鸿信息通信研究院(义乌)有限公司 A kind of library data service extension system and method based on 5G framework

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
钟瑜 (ZHONG Yu) et al.: "A capability evaluation method for embedded edge computing platforms", Communications Technology (《通信技术》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112631986A (en) * 2020-12-28 2021-04-09 西南电子技术研究所(中国电子科技集团公司第十研究所) Large-scale DSP parallel computing device
CN112631986B (en) * 2020-12-28 2024-04-02 西南电子技术研究所(中国电子科技集团公司第十研究所) Large-scale DSP parallel computing device

Also Published As

Publication number Publication date
CN110769037B (en) 2021-12-07

Similar Documents

Publication Publication Date Title
KR102310187B1 (en) A distributed computing system including multiple edges and cloud, and method for providing model for using adaptive intelligence thereof
US9053067B2 (en) Distributed data scalable adaptive map-reduce framework
CN104536937B (en) Big data all-in-one machine realization method based on CPU GPU isomeric groups
US8849888B2 (en) Candidate set solver with user advice
US10380282B2 (en) Distributable and customizable load-balancing of data-associated computation via partitions and virtual processes
US11232009B2 (en) Model-based key performance indicator service for data analytics processing platforms
JP2014525640A (en) Expansion of parallel processing development environment
Vögler et al. Ahab: A cloud‐based distributed big data analytics framework for the Internet of Things
US10218622B2 (en) Placing a network device into a maintenance mode in a virtualized computing environment
CN105793822A (en) Dynamic shuffle reconfiguration
US10516729B2 (en) Dynamic graph adaptation for stream processing over hybrid, physically disparate analytics platforms
Yin et al. Scalable mapreduce framework on fpga accelerated commodity hardware
US20170371713A1 (en) Intelligent resource management system
Parizotto et al. Offloading machine learning to programmable data planes: A systematic survey
CN110769037B (en) Resource allocation method for embedded edge computing platform
Mohamed et al. A survey of big data machine learning applications optimization in cloud data centers and networks
CN116775041B (en) Real-time decision engine implementation method based on stream calculation and RETE algorithm
CN107168795B (en) Codon deviation factor model method based on CPU-GPU isomery combined type parallel computation frame
Sahebi et al. Distributed large-scale graph processing on FPGAs
Roman et al. Understanding spark performance in hybrid and multi-site clouds
da Rosa Righi et al. Designing Cloud-Friendly HPC Applications
Brasilino et al. Data Distillation at the Network's Edge: Exposing Programmable Logic with InLocus
da Silva Veith Quality of service aware mechanisms for (re) configuring data stream processing applications on highly distributed infrastructure
US20180349528A1 (en) Scalable Update Propagation Via Query Aggregations and Connection Migrations
Yang et al. Processing in memory assisted MEC 3C resource allocation for computation offloading

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant