US20230136612A1 - Optimizing concurrent execution using networked processing units - Google Patents

Optimizing concurrent execution using networked processing units

Info

Publication number
US20230136612A1
Authority
US
United States
Prior art keywords
task
workload
tasks
remediation
compute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/090,749
Inventor
Kshitij Arun Doshi
Francesc Guim Bernat
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US18/090,749
Assigned to INTEL CORPORATION (assignment of assignors' interest; see document for details). Assignors: Guim Bernat, Francesc; Doshi, Kshitij Arun
Publication of US20230136612A1
Legal status: Pending

Classifications

    • H04L 41/0889: Techniques to speed up the configuration process of data switching networks
    • G06F 11/0709: Error or fault processing not based on redundancy, in a distributed system consisting of a plurality of standalone computer nodes (e.g., clusters, client-server systems)
    • G06F 11/0793: Remedial or corrective actions
    • G06F 12/0851: Cache with interleaved addressing
    • G06F 12/0873: Mapping of cache memory to specific storage devices or parts thereof
    • G06F 9/3005: Arrangements for executing specific machine instructions to perform operations for flow control
    • G06F 9/4881: Scheduling strategies for dispatcher (e.g., round robin, multi-level priority queues)
    • G06F 9/5038: Allocation of resources considering the execution order of a plurality of tasks (e.g., taking priority or time dependency constraints into consideration)
    • G06F 9/5044: Allocation of resources considering hardware capabilities
    • G06F 9/505: Allocation of resources considering the load
    • G06F 9/5066: Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G06F 9/5072: Grid computing
    • G06F 9/5077: Logical partitioning of resources; management or configuration of virtualized resources
    • G06F 9/5083: Techniques for rebalancing the load in a distributed system
    • G06F 9/5094: Allocation of resources where the allocation takes into account power or heat criteria
    • G06F 9/54: Interprogram communication
    • H04L 41/5019: Ensuring fulfilment of SLA
    • H04L 63/0876: Network security authentication of entities based on the identity of the terminal or configuration (e.g., MAC address, hardware or software configuration, or device fingerprint)
    • H04L 63/12: Network security by applying verification of the received information
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1091: Peer-to-peer networks interfacing with client-server systems or between P2P systems
    • H04L 67/1097: Distributed storage of data in networks (e.g., transport arrangements for network file system [NFS], storage area networks [SAN], or network attached storage [NAS])
    • H04L 67/12: Protocols specially adapted for proprietary or special-purpose networking environments (e.g., medical networks, sensor networks, networks in vehicles, or remote metering networks)
    • H04L 67/63: Routing a service request depending on the request content or context
    • G06F 3/0611: Improving I/O performance in relation to response time
    • G06F 3/0659: Command handling arrangements (e.g., command buffers, queues, command scheduling)
    • G06F 3/067: Distributed or networked storage systems (e.g., storage area networks [SAN], network attached storage [NAS])
    • H04L 41/5003: Managing SLA; interaction between SLA and QoS
    • Y02D 10/00: Energy efficient computing (e.g., low power processors, power management, or thermal management)

Definitions

  • Embodiments described herein generally relate to data processing, network communication, and communication system implementations of distributed computing, including implementations that use networked processing units such as infrastructure processing units (IPUs) or data processing units (DPUs).
  • Computing deployments are moving to highly distributed multi-edge and multi-tenant arrangements. Deployments may have different limitations in terms of power and space, and may use different types of compute, acceleration, and storage technologies to overcome these power and space limitations. Deployments also are typically interconnected in a tiered and/or peer-to-peer fashion, in an attempt to create a network of connected devices and edge appliances that work together.
  • Edge computing, at a general level, has been described as systems that provide the transition of compute and storage resources closer to endpoint devices at the edge of a network (e.g., consumer computing devices, user equipment, etc.). As compute and storage resources are moved closer to endpoint devices, a variety of advantages have been promised, such as reduced application latency, improved service capabilities, improved compliance with security or data privacy requirements, improved backhaul bandwidth, improved energy consumption, and reduced cost. However, many deployments of edge computing technologies (especially complex deployments for use by multiple tenants) have not been fully adopted.
  • FIG. 1 illustrates an overview of a distributed edge computing environment, according to an example
  • FIG. 2 depicts computing hardware provided among respective deployment tiers in a distributed edge computing environment, according to an example
  • FIG. 3 depicts additional characteristics of respective deployment tiers in a distributed edge computing environment, according to an example
  • FIG. 4 depicts a computing system architecture including a compute platform and a network processing platform provided by an infrastructure processing unit, according to an example
  • FIG. 5 depicts an infrastructure processing unit arrangement operating as a distributed network processing platform within network and data center edge settings, according to an example
  • FIG. 6 depicts functional components of an infrastructure processing unit and related services, according to an example
  • FIG. 7 depicts a block diagram of example components in an edge computing system which implements a distributed network processing platform, according to an example
  • FIG. 8 depicts an arrangement of distributed processing provided at an edge computing network layer, according to an example
  • FIG. 9 depicts a task graph illustrating scenarios for optimization of concurrent task execution, according to an example
  • FIG. 10 depicts a workflow sequence for identifying and triggering remediation for concurrent execution bottlenecks, according to an example
  • FIG. 11 depicts a further example scenario of concurrent task execution, according to an example.
  • FIG. 12 depicts a flowchart of an example method for optimizing concurrent execution of workload tasks, according to an example.
  • The following introduces various techniques to deploy, identify, manage, and respond to concurrent execution of tasks, including techniques to optimize join points for such tasks.
  • Such optimization may provide significant advantages in a distributed compute environment (such as using the distributed IPU architecture discussed in the following paragraphs).
  • Such techniques enable improved power efficiency by selectively applying increased power or resources only for tasks that need such capability.
  • The following provides precise identification of a task to be moved, re-deployed, or replicated, without needing to waste computation or power resources. Additional details on such optimization techniques are provided after a discussion of distributed edge computing scenarios.
  • FIG. 1 is a block diagram 100 showing an overview of a distributed edge computing environment, which may be adapted for implementing the present techniques for distributed networked processing units.
  • The edge cloud 110 is established from processing operations among one or more edge locations, such as a satellite vehicle 141, a base station 142, a network access point 143, an on-premises server 144, a network gateway 145, or similar networked devices and equipment instances. These processing operations may be coordinated by one or more edge computing platforms 120 or systems that operate networked processing units (e.g., IPUs, DPUs) as discussed herein.
  • The edge cloud 110 is generally defined as involving compute that is located closer to endpoints 160 (e.g., consumer and producer data sources) than the cloud 130, such as autonomous vehicles 161, user equipment 162, business and industrial equipment 163, video capture devices 164, drones 165, smart cities and building devices 166, sensors and IoT devices 167, etc.
  • Compute, memory, network, and storage resources offered at the entities in the edge cloud 110 can provide ultra-low or improved latency response times for services and functions used by the endpoint data sources, and can reduce network backhaul traffic from the edge cloud 110 toward the cloud 130, thus improving energy consumption and overall network usage, among other benefits.
  • Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer end point devices than at a base station or a central office data center).
  • Edge computing attempts to minimize the number of resources needed for network services, through the distribution of more resources that are located closer both geographically and in terms of in-network access time.
  • FIG. 2 depicts examples of computing hardware provided among respective deployment tiers in a distributed edge computing environment.
  • One tier at an on-premise edge system is an intelligent sensor or gateway tier 210, which operates network devices with low-power and entry-level processors and low-power accelerators.
  • Another tier at an on-premise edge system is an intelligent edge tier 220, which operates edge nodes with higher power limitations and may include high-performance storage.
  • A network edge tier 230 operates servers, including form factors optimized for extreme conditions (e.g., outdoors).
  • A data center edge tier 240 operates additional types of edge nodes, such as servers, and includes increasingly powerful or capable hardware and storage technologies.
  • A core data center tier 250 and a public cloud tier 260 operate compute equipment with the highest power consumption and the largest configuration of processors, acceleration, storage/memory devices, and the highest-throughput network.
  • Among these tiers, various forms of Intel® processor lines are depicted for purposes of illustration; it will be understood that other brands and manufacturers of hardware will be used in real-world deployments. Additionally, it will be understood that additional features or functions may exist among multiple tiers.
  • One such example is connectivity and infrastructure management that enables a distributed IPU architecture, which can potentially extend across all of tiers 210, 220, 230, 240, 250, 260.
  • Other relevant functions that may extend across multiple tiers may relate to security features, domain or group functions, and the like.
  • FIG. 3 depicts additional characteristics of respective deployment tiers in a distributed edge computing environment, based on the tiers discussed with reference to FIG. 2 .
  • This figure depicts the additional network latencies at each of the tiers 210, 220, 230, 240, 250, 260, and the gradual increase in latency in the network as compute is located farther from the edge endpoints. Additionally, this figure depicts additional power and form factor constraints, use cases, and key performance indicators (KPIs).
  • Edge computing within the edge cloud 110 may provide the ability to serve and respond to multiple applications of the use cases in real time or near real time and meet ultra-low latency requirements.
  • Networking has become one of the fundamental pieces of the architecture that allows achieving scale with resiliency, security, and reliability.
  • Networking technologies have evolved to provide more capabilities beyond pure network routing, including coordination of quality of service, security, multi-tenancy, and the like. This has also been accelerated by the development of new smart network adapter cards and other types of network derivatives that incorporate capabilities such as ASICs (application-specific integrated circuits) or FPGAs (field-programmable gate arrays) to accelerate some of those functionalities (e.g., remote attestation).
  • Networked processing units have begun to be deployed at network cards (e.g., smart NICs), gateways, and the like, which allow direct processing of network workloads and operations.
  • One example of a networked processing unit is an infrastructure processing unit (IPU), which is a programmable network device that can be extended to provide compute capabilities with far richer functionalities beyond pure networking functions.
  • The following discussion refers to functionality applicable to an IPU configuration, such as that provided by an Intel® line of IPU processors. However, it will be understood that such functionality will be equally applicable to DPUs and other types of networked processing units provided by ARM®, Nvidia®, and other hardware OEMs.
  • FIG. 4 depicts an example compute system architecture that includes a compute platform 420 and a network processing platform comprising an IPU 410 .
  • The main compute platform 420 is composed of typical elements that are included with a computing node, such as one or more CPUs 424 that may or may not be connected via a coherent domain (e.g., via Ultra Path Interconnect (UPI) or another processor interconnect); one or more memory units 425; one or more additional discrete devices 426 such as storage devices or discrete acceleration cards (e.g., a field-programmable gate array (FPGA), a visual processing unit (VPU), etc.); a baseboard management controller 421; and the like.
  • The compute platform 420 may operate one or more containers 422 (e.g., with one or more microservices), within a container runtime 423 (e.g., Docker containerd).
  • The IPU 410 operates as a networking interface and is connected to the compute platform 420 using an interconnect (e.g., using either PCIe or CXL).
  • The IPU 410, in this context, can be observed as another small compute device that has its own: (1) processing cores (e.g., provided by low-power cores 417); (2) operating system (OS) and cloud-native platform 414 to operate one or more containers 415 and a container runtime 416; (3) acceleration functions provided by an ASIC 411 or FPGA 412; (4) memory 418; (5) network functions provided by network circuitry 413; etc.
  • The IPU 410 is seen as a discrete device from the local host (e.g., the OS running in the compute platform CPUs 424) that is available to provide certain functionalities (networking, acceleration, etc.). Those functionalities are typically provided via physical or virtual PCIe functions. Additionally, the IPU 410 is seen as a host (with its own IP address, etc.) that can be accessed by the infrastructure to set up an OS, run services, and the like. The IPU 410 sees all the traffic going to the compute platform 420 and can perform actions, such as intercepting the data or performing some transformation, as long as the correct security credentials are hosted to decrypt the traffic.
  • Traffic going through the IPU traverses all the layers of the OSI (Open Systems Interconnection) model stack (e.g., from the physical to the application layer). Depending on the features that the IPU has, processing may be performed at the transport layer only. However, if the IPU has capabilities to perform traffic interception, then the IPU may also be able to intercept traffic at the application layer (e.g., intercept CDN traffic and process it locally).
  • Common uses of IPUs and similar networked processing units include: accelerating network processing; managing hosts (e.g., in a data center); or implementing quality of service policies.
  • However, most functionalities today are focused on using the IPU at the local appliance level and within a single system. These approaches do not address how IPUs could work together in a distributed fashion or how system functionalities can be divided among the IPUs on other parts of the system. Accordingly, the following introduces enhanced approaches for enabling and controlling distributed functionality among multiple networked processing units. This enables the extension of current IPU functionalities to work as a distributed set of IPUs that can work together to achieve stronger features such as resiliency, reliability, etc.
  • FIG. 5 depicts an IPU arrangement operating as a distributed network processing platform within network and data center edge settings.
  • Workloads or processing requests may be provided directly to an IPU platform, such as directly to IPU 514.
  • Alternatively, workloads or processing requests may be provided to an intermediate processing device 512, such as a gateway or a NUC (next unit of computing) device form factor, and the intermediate processing device 512 forwards the workloads or processing requests to the IPU 514.
  • The IPU 514 directly receives data from use cases 502A.
  • The IPU 514 operates one or more containers with microservices to perform processing of the data.
  • The IPU 514 may process data as a small aggregator of sensors that runs on the far edge, or may perform some level of inline processing or preprocessing and send the payload to be further processed by the IPU or the system to which the IPU connects.
  • The intermediate processing device 512 provided by the gateway or NUC receives data from use cases 502B.
  • The intermediate processing device 512 includes various processing elements (e.g., CPU cores, GPUs), and may operate one or more microservices for servicing workloads from the use cases 502B.
  • The intermediate processing device 512 invokes the IPU 514 to complete processing of the data.
  • The IPU 514 may connect with a local compute platform, such as that provided by a CPU 516 (e.g., Intel® Xeon CPU) operating multiple microservices.
  • The IPU 514 may also connect with a remote compute platform, such as that provided at a data center by CPU 540 at a remote server.
  • For example, consider a microservice that performs some analytical processing (e.g., face detection on image data), where the CPU 516 and the CPU 540 provide access to this same microservice.
  • The IPU 514, depending on the current load of the CPU 516 and the CPU 540, may decide to forward the images or payload to one of the two CPUs. Data forwarding or processing can also depend on other factors such as an SLA for latency or performance metrics (e.g., perf/watt) in the two systems.
  • In this manner, the distributed IPU architecture may accomplish features of load balancing, as illustrated in the sketch below.
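  • As an illustration only (not part of the patent disclosure), the following Python sketch shows one way such a forwarding decision could be expressed. The names CandidateHost and choose_target, and the load, latency, and perf/watt figures, are hypothetical placeholders for the kinds of inputs described above (current load, an SLA for latency, and perf/watt).

```python
# Minimal sketch of a forwarding decision between a local CPU and a remote CPU
# hosting the same microservice. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class CandidateHost:
    name: str
    load: float                # current utilization, 0.0 - 1.0
    expected_latency_ms: float
    perf_per_watt: float       # higher is better

def choose_target(hosts, latency_slo_ms):
    """Pick the host that meets the latency SLO with the best perf/watt,
    falling back to the least-loaded host if none meets the SLO."""
    eligible = [h for h in hosts if h.expected_latency_ms <= latency_slo_ms]
    if eligible:
        return max(eligible, key=lambda h: h.perf_per_watt)
    return min(hosts, key=lambda h: h.load)

if __name__ == "__main__":
    local_cpu = CandidateHost("CPU 516 (local)", load=0.85,
                              expected_latency_ms=40.0, perf_per_watt=2.1)
    remote_cpu = CandidateHost("CPU 540 (data center)", load=0.30,
                               expected_latency_ms=25.0, perf_per_watt=3.4)
    target = choose_target([local_cpu, remote_cpu], latency_slo_ms=30.0)
    print(f"Forward payload to: {target.name}")
```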
  • The IPU in the computing environment 510 may be coordinated with other network-connected IPUs.
  • A Service and Infrastructure Orchestration manager 530 may use multiple IPUs as a mechanism to implement advanced service processing schemes for the user stacks. This may also enable implementing system functionalities such as failover, load balancing, etc.
  • IPUs can be arranged in the following non-limiting configurations.
  • In a first configuration, a particular IPU (e.g., IPU 514) can be configured to work with other IPUs (e.g., IPU 520) to forward traffic to service replicas that run on other systems when a local host does not respond.
  • In another configuration, a particular IPU (e.g., IPU 514) can work with other IPUs (e.g., IPU 520) to perform load balancing across other systems. For example, consider a scenario where CDN traffic targeted to the local host is forwarded to another host in case I/O or compute in the local host is scarce at a given moment.
  • In a further configuration, a particular IPU (e.g., IPU 514) can work as a power management entity to implement advanced system policies. For example, consider a scenario where the whole system (e.g., including CPU 516) is placed in a C6 state (a low-power/power-down state available to a processor) while forwarding traffic to other systems (e.g., IPU 520) and consolidating it there.
  • Edge computing systems may be adapted to include coordinated IPUs, and such deployments can be orchestrated to use IPUs at multiple locations to expand to the new envisioned functionality.
  • FIG. 6 depicts functional components of an IPU 610 , including services and features to implement the distributed functionality discussed herein. It will be understood that some or all of the functional components provided in FIG. 6 may be distributed among multiple IPUs, hardware components, or platforms, depending on the particular configuration and use case involved.
  • A number of functional components are operated to manage requests for a service running in the IPU (or running in the local host).
  • IPUs can either run services or intercept requests arriving at services running in the local host and perform some action. In the latter case, the IPU can perform the following types of actions/functions (provided as non-limiting examples).
  • Each IPU is provided with Peer Discovery logic to discover other IPUs in the distributed system that can work together with it.
  • Peer Discovery logic may use mechanisms such as broadcasting to discover other IPUs that are available on a network.
  • The Peer Discovery logic is also responsible for working with the Peer Attestation and Authentication logic to validate and authenticate each peer IPU's identity, determine whether it is trustworthy, and determine whether the current system tenant allows the current IPU to work with it.
  • An IPU may perform operations such as: retrieving a proof of identity and a proof of attestation; connecting to a trusted service running on a trusted server; or validating that the discovered system is trustworthy.
  • Various technologies (including hardware components or standardized software implementations) that enable attestation, authentication, and security may be used with such operations.
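  • For illustration, a minimal sketch of such a discovery-then-attestation flow is shown below, assuming a hypothetical broadcast response format and a placeholder verify_with_trusted_service() check standing in for a real attestation service (such as the trusted attestation service 640):

```python
# Hedged sketch of peer discovery followed by attestation and tenant admission.
# The transport, proof format, and verification step are assumptions.
from dataclasses import dataclass

@dataclass
class PeerIPU:
    address: str
    identity_proof: str      # opaque proof of identity/attestation
    tenant_id: str

def discover_peers(broadcast_responses):
    """Turn raw broadcast responses into candidate peer records."""
    return [PeerIPU(r["addr"], r["proof"], r["tenant"]) for r in broadcast_responses]

def verify_with_trusted_service(proof: str) -> bool:
    # Placeholder: in practice this would contact a trusted attestation service.
    return proof.startswith("valid:")

def admit_peer(peer: PeerIPU, allowed_tenants: set) -> bool:
    """A peer is admitted only if its attestation verifies and the current
    system tenant allows collaboration with the peer's tenant."""
    return verify_with_trusted_service(peer.identity_proof) and peer.tenant_id in allowed_tenants

if __name__ == "__main__":
    responses = [
        {"addr": "10.0.0.2", "proof": "valid:abc", "tenant": "tenant-a"},
        {"addr": "10.0.0.3", "proof": "forged:xyz", "tenant": "tenant-a"},
    ]
    peers = [p for p in discover_peers(responses) if admit_peer(p, {"tenant-a"})]
    print([p.address for p in peers])   # only the attested peer remains
```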
  • Each IPU provides interfaces to other IPUs to enable attestation of the IPU itself.
  • IPU Attestation logic is used to perform an attestation flow within a local IPU in order to create the proof of identity that will be shared with other IPUs. Attestation here may integrate previous approaches and technologies to attest a compute platform. This may also involve the use of trusted attestation service 640 to perform the attestation operations.
  • A particular IPU includes capabilities to discover the functionalities that peer IPUs provide. Once the authentication is done, the IPU can determine what functionalities the peer IPUs provide (using the IPU Peer Discovery logic) and store a record of such functionality locally. Examples of properties to discover can include: (i) the type of IPU, the functionalities provided, and associated KPIs (e.g., enclaves provided by Intel® SGX or TDX technologies); (ii) current services that are running on the IPU and on the system that can potentially accept requests forwarded from this IPU; or (iii) other interfaces or hooks that are provided by an IPU, such as access to remote storage, access to a remote VPU, or access to certain functions.
  • A service may be described by properties such as: a UUID; estimated performance KPIs in the host or IPU; average performance provided by the system during the last N units of time (or any other type of indicator); and like properties.
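  • A minimal data-structure sketch of such locally stored records follows; the field names are illustrative assumptions rather than a schema defined by this description:

```python
# Illustrative records for discovered peer capabilities and described services.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceDescriptor:
    uuid: str
    estimated_kpis: dict           # e.g., {"latency_ms": 20, "throughput_rps": 500}
    avg_performance_last_n: float  # average performance over the last N time units

@dataclass
class PeerCapabilityRecord:
    ipu_type: str
    security_features: List[str]                    # e.g., ["SGX enclave", "TDX"]
    services: List[ServiceDescriptor] = field(default_factory=list)
    hooks: List[str] = field(default_factory=list)  # e.g., ["remote storage", "remote VPU"]

# Example of storing a record locally after discovery:
record = PeerCapabilityRecord(
    ipu_type="gateway-IPU",
    security_features=["SGX enclave"],
    services=[ServiceDescriptor("9b2e-uuid", {"latency_ms": 20}, avg_performance_last_n=0.93)],
    hooks=["remote storage"],
)
print(record.ipu_type, len(record.services))
```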
  • The IPU includes functionality to manage services that are running either on the host compute platform or in the IPU itself.
  • Managing (orchestrating) services includes performing service and resource orchestration for the services that can run on the IPU or that the IPU can affect.
  • Two types of usage models are envisioned:
  • First, the IPU may enable external orchestrators to deploy services on the IPU's compute capabilities.
  • An IPU may include a component similar to Kubernetes (K8s)-compatible APIs to manage the containers (services) that run on the IPU itself.
  • The IPU may run a service that is just providing content to storage connected to the platform.
  • The orchestration entity running in the IPU may manage the services running in the IPU as happens in other systems (e.g., keeping the service level objectives).
  • Second, external orchestrators can be allowed to register to the IPU that services running on the host may require the IPU to broker requests, implement failover mechanisms, and provide other functionalities. For example, an external orchestrator may register that a particular service running on the local compute platform is replicated in another edge node managed by another IPU, where requests can be forwarded.
  • External orchestrators may provide to the Service/Application Intercept logic the inputs that are needed to intercept traffic for these services (as such traffic typically is encrypted). This may include properties such as the source and destination of the traffic to be intercepted, or the key to use to decrypt the traffic. Likewise, this may be needed to terminate TLS in order to understand the requests that arrive at the IPU, which the other logic may need to parse to take actions. For example, if there is a CDN read request, the IPU may need to decrypt the packet to understand that the network packet includes a read request, and it may redirect the request to another host based on the content that is being intercepted. Examples of Service/Application Intercept information are depicted in table 620 in FIG. 6.
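  • The following sketch illustrates, under assumed names (InterceptRule, InterceptRegistry), how an external orchestrator could register the intercept inputs described above and how a decrypted content read could then be redirected to a replica; it is a simplified stand-in for the actual Service/Application Intercept logic and table 620:

```python
# Sketch of intercept registration and content-based redirection.
# Field and class names are hypothetical; they only mirror the kinds of inputs
# described above (source, destination, decryption key, replica location).
from dataclasses import dataclass

@dataclass
class InterceptRule:
    service_name: str
    src: str                 # source of the traffic to intercept
    dst: str                 # destination of the traffic to intercept
    tls_key_ref: str         # reference to the key used to terminate TLS / decrypt
    replica_ipu: str         # peer IPU hosting a replica for redirection

class InterceptRegistry:
    def __init__(self):
        self.rules = {}

    def register(self, rule: InterceptRule):
        """Called by an external orchestrator to enable interception/brokering."""
        self.rules[rule.service_name] = rule

    def route(self, service_name: str, decrypted_request: str) -> str:
        """Redirect content reads (e.g., CDN GETs) to the registered replica."""
        rule = self.rules.get(service_name)
        if rule and decrypted_request.startswith("GET"):
            return rule.replica_ipu
        return "local-host"

registry = InterceptRegistry()
registry.register(InterceptRule("cdn-cache", "0.0.0.0/0", "10.0.0.5:443",
                                "kms://cdn-key", "ipu-520"))
print(registry.route("cdn-cache", "GET /video/segment42"))   # -> ipu-520
```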
  • External orchestration can be implemented in multiple topologies.
  • One supported topology includes having the orchestrator that manages all the IPUs running on the backend public or private cloud.
  • Another supported topology includes having that orchestrator running in a centralized edge appliance.
  • Still another supported topology includes having the orchestrator running in another IPU that is working as the controller, having the orchestrator distributed across multiple other IPUs that are working as controllers (master/primary nodes), or a hierarchical arrangement.
  • The IPU may include Service Request Brokering logic and Load Balancing logic to perform brokering actions upon the arrival of requests for target services running in the local system. For instance, the IPU may decide to see if those requests can be executed by other peer systems (e.g., accessible through the Service and Infrastructure Orchestration 630). This can be caused, for example, by high load in the local system.
  • The local IPU may negotiate with other peer IPUs for the possibility of forwarding the request. Negotiation may involve metrics such as cost. Based on such negotiation metrics, the IPU may decide to forward the request.
  • The Service Request Brokering and Load Balancing logic may distribute requests arriving at the local IPU to other peer IPUs.
  • The other IPUs and the local IPU work together and do not necessarily need brokering.
  • Such logic acts similarly to a cloud-native sidecar proxy. For instance, requests arriving at the system may be sent to the service X running in the local system (either the IPU or the compute platform) or forwarded to a peer IPU that has another instance of service X running.
  • The load-balancing distribution can be based on existing algorithms, such as selecting the systems that have lower load, using round robin, etc.
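  • A minimal sketch of the two load-balancing policies mentioned above (least-loaded selection and round robin) is shown below; the peer names and load values are illustrative:

```python
# Two simple target-selection policies for request brokering.
import itertools

class PeerSelector:
    def __init__(self, peers):
        self.peers = peers                        # {"name": current_load}
        self._rr = itertools.cycle(sorted(peers)) # stable round-robin order

    def least_loaded(self):
        return min(self.peers, key=self.peers.get)

    def round_robin(self):
        return next(self._rr)

selector = PeerSelector({"local": 0.9, "ipu-520": 0.4, "ipu-530": 0.6})
print(selector.least_loaded())   # "ipu-520"
print(selector.round_robin())    # cycles through peers in order
```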
  • The IPU includes Reliability and Failover logic to monitor the status of the services running on the compute platform or the status of the compute platform itself.
  • The Reliability and Failover logic may require the Load Balancing logic to transiently or permanently forward requests that target specific services in situations such as where: (i) the compute platform is not responding; (ii) the service running inside the compute node is not responding; or (iii) the compute platform load prevents the targeted service from providing the right level of service level objectives (SLOs). Note that the logic must know the required SLOs for the services.
  • Such functionality may be coordinated with service information 650 including SLO information.
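  • The three failover conditions listed above can be summarized in a short decision sketch; the health-check inputs are placeholders for what the Reliability and Failover logic would obtain from platform monitoring and service probes:

```python
# Sketch of the three failover triggers described above.
def should_failover(platform_alive: bool, service_alive: bool,
                    observed_latency_ms: float, slo_latency_ms: float) -> bool:
    if not platform_alive:                       # (i) compute platform not responding
        return True
    if not service_alive:                        # (ii) service inside the compute node not responding
        return True
    if observed_latency_ms > slo_latency_ms:     # (iii) load prevents meeting the SLO
        return True
    return False

# Example: the platform and service are up, but the SLO is violated, so requests
# are transiently forwarded to a replica on a peer IPU.
print(should_failover(True, True, observed_latency_ms=120.0, slo_latency_ms=50.0))  # True
```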
  • The IPU may include workload pipeline execution logic that understands how workloads are composed and manages their execution.
  • Workloads can be defined as a graph that connects different microservices.
  • The load balancing and brokering logic may be able to understand those graphs and decide what parts of the pipeline are executed where. Further, to perform these and other operations, the Intercept logic will also decode what is included as part of the arriving requests.
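  • As an illustrative sketch (with an assumed resource-availability placement rule that the description does not prescribe), a workload graph and a simple placement pass might look like the following:

```python
# A workload defined as a graph of microservices, with a simple placement pass
# deciding where each part of the pipeline executes. The rule "run a stage where
# its preferred resource is available" is an assumption for illustration only.
workload_graph = {
    "ingest":  {"next": ["decode"],  "needs": "network"},
    "decode":  {"next": ["detect"],  "needs": "accelerator"},
    "detect":  {"next": ["publish"], "needs": "cpu"},
    "publish": {"next": [],          "needs": "network"},
}

site_resources = {
    "local-ipu":   {"network", "accelerator"},
    "remote-host": {"cpu", "network"},
}

def place_pipeline(graph, sites):
    """Assign each task to the first site offering the resource it needs."""
    placement = {}
    for task, info in graph.items():
        for site, resources in sites.items():
            if info["needs"] in resources:
                placement[task] = site
                break
    return placement

print(place_pipeline(workload_graph, site_resources))
# e.g., {'ingest': 'local-ipu', 'decode': 'local-ipu', 'detect': 'remote-host', 'publish': 'local-ipu'}
```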
  • A distributed network processing configuration may enable IPUs to perform an important role in managing resources of edge appliances.
  • The functional components of an IPU can operate to perform these and similar types of resource management functionalities.
  • An IPU can provide management of or access to external resources that are hosted in other locations and expose them as local resources using constructs such as Compute Express Link (CXL).
  • The IPU could potentially provide access to a remote accelerator that is hosted in a remote system via CXL.mem/cache and I/O.
  • Another example includes providing access to a remote storage device hosted in another system.
  • The local IPU could work with another IPU in the storage system and expose the remote system as PCIe VF/PF (virtual functions/physical functions) to the local host.
  • An IPU can provide access to IPU-specific resources.
  • Those IPU resources may be physical (such as storage or memory) or virtual (such as a service that provides access to random number generation).
  • An IPU can manage local resources that are hosted in the system to which it belongs.
  • The IPU can manage power of the local compute platform.
  • An IPU can provide access to other types of elements that relate to resources (such as telemetry or other types of data).
  • Telemetry provides useful data for deciding where to execute things or for identifying problems.
  • The IPU can also include functionality to manage I/O from the system perspective.
  • The IPU includes Host Virtualization and XPU Pooling logic responsible for managing access to resources that are outside the system domain (or within the IPU) and that can be offered to the local compute system.
  • XPU refers to any type of processing unit, whether a CPU, GPU, VPU, acceleration processing unit, etc.
  • The IPU logic, after discovery and attestation, can agree with other systems to share external resources with the services running in the local system.
  • IPUs may advertise available resources to other peers, or such resources can be discovered during the discovery phase as introduced earlier. IPUs may request access to those resources from other IPUs. For example, an IPU on system A may request access to storage on system B managed by another IPU. The remote and local IPUs can work together to establish a connection between the target resources and the local system.
  • Resources can be exposed to the services running in the local compute node using the VF/PF PCIe and CXL logic. Each of those resources can be offered as a VF/PF.
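  • The advertise/request/expose flow described above is sketched below; the catalog object and the string VF/PF handle are illustrative stand-ins, since actual exposure would occur through PCIe virtual/physical functions or CXL rather than application code:

```python
# Hedged sketch of the advertise/request/expose flow between IPUs.
class ResourceCatalog:
    def __init__(self):
        self.advertised = {}     # peer -> list of resource names

    def advertise(self, peer: str, resources: list):
        self.advertised[peer] = resources

    def request(self, requester: str, peer: str, resource: str):
        """Requester asks a peer IPU for an advertised resource; on success the
        resource is 'exposed' to the requester's local host as a VF/PF handle."""
        if resource in self.advertised.get(peer, []):
            return {"resource": resource, "owner": peer,
                    "requested_by": requester,
                    "exposed_as": f"vf:{peer}/{resource}"}
        raise LookupError(f"{peer} does not advertise {resource}")

catalog = ResourceCatalog()
catalog.advertise("ipu-system-b", ["nvme-pool-0", "vpu-1"])
handle = catalog.request("ipu-system-a", "ipu-system-b", "nvme-pool-0")
print(handle["exposed_as"])   # vf:ipu-system-b/nvme-pool-0
```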
  • The IPU logic can expose to the local host resources that are hosted in the IPU. Examples of resources to expose may include local accelerators, access to services, and the like.
  • Power management is one of the key features for achieving favorable system operational expenditures (OPEX). The IPU is very well positioned to optimize the power consumption of the local system.
  • The distributed and local power management unit is responsible for metering the power that the system is consuming and the load that the system is receiving, and for tracking the service level agreements that the various services running in the system are achieving for the arriving requests.
  • Based on these measurements and on metrics such as power usage effectiveness (PUE), the IPU may decide to forward requests for local services to other IPUs that host replicas of the services.
  • Such power management features may also coordinate with the Brokering and Load Balancing logic discussed above. As will be understood, IPUs can work together to decide where requests can be consolidated to establish higher power efficiency as a system. When traffic is redirected, the local power consumption can be reduced in different ways.
  • Example operations that can be performed include: changing the system to a C6 state; changing the base frequencies; or performing other adaptations of the system or system components.
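  • A simplified sketch of such a power-management decision follows; the load thresholds and the specific actions (enter C6, reduce base frequency) are illustrative assumptions consistent with the examples above:

```python
# Sketch of a power-management decision: when load is low and peer replicas
# exist, redirect traffic and drop the local platform into a deeper power state.
def power_action(local_load: float, sla_headroom: float, peers_with_replicas: list):
    """Return (where_to_send_traffic, local_power_action)."""
    if local_load < 0.2 and sla_headroom > 0.5 and peers_with_replicas:
        # Consolidate requests on a peer and power the whole local system down.
        return peers_with_replicas[0], "enter C6 state"
    if local_load < 0.5:
        # Keep serving locally but lower the base frequency to save power.
        return "local", "reduce base frequency"
    return "local", "no change"

print(power_action(0.1, 0.8, ["ipu-520"]))   # ('ipu-520', 'enter C6 state')
print(power_action(0.4, 0.8, ["ipu-520"]))   # ('local', 'reduce base frequency')
```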
  • Telemetry metrics: The IPU can generate multiple types of metrics that can be of interest to services, orchestration, or tenants owning the system.
  • Telemetry can be accessed in various ways, including: (i) out of band via side interfaces; (ii) in band by services running in the IPU; or (iii) out of band using PCIe or CXL from the host perspective.
  • Relevant types of telemetry can include: platform telemetry; service telemetry; IPU telemetry; traffic telemetry; and the like.
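  • For illustration, the telemetry categories and access paths above could be organized as in the following sketch; the schema is an assumption for clarity, not part of the description:

```python
# Illustrative organization of telemetry categories and access paths.
from dataclasses import dataclass
from enum import Enum

class AccessPath(Enum):
    OUT_OF_BAND_SIDE_INTERFACE = "oob-side"
    IN_BAND_IPU_SERVICE = "in-band"
    OUT_OF_BAND_HOST_PCIE_CXL = "oob-host"

@dataclass
class TelemetryRecord:
    category: str        # "platform", "service", "ipu", or "traffic"
    metric: str
    value: float
    access_path: AccessPath

sample = TelemetryRecord("traffic", "rx_requests_per_s", 1250.0,
                         AccessPath.IN_BAND_IPU_SERVICE)
print(sample)
```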
  • Configurations of distributed IPUs working together may include: remote IPUs accessed via an IP network, such as within a certain latency for data plane or storage offloads (or connected for management/control plane operations); or distributed IPUs providing an interconnected network of IPUs, including as many as hundreds of nodes within a domain.
  • Configurations of distributed IPUs working together may also include fragmented distributed IPUs, where each IPU or pooled system provides part of the functionalities, and each IPU becomes a malleable system.
  • Configurations of distributed IPUs may also include virtualized IPUs, such as provided by a gateway, switch, or an inline component (e.g., inline between the service acting as IPU), and in some examples, in scenarios where the system has no IPU.
  • Other example arrangements for connecting IPUs include: IPU-to-IPU in the same tier or a close tier; IPU-to-IPU in the cloud (data moved to compute versus compute moved to data); integration in small device form factors (e.g., gateway IPUs); a gateway/NUC plus an IPU which connects to a data center; multiple gateways/NUCs (e.g., 16); a gateway/NUC plus an IPU on the server (e.g., a switch); or a gateway/NUC and IPU which are connected to a server with an IPU.
  • the preceding distributed IPU functionality may be implemented among a variety of types of computing architectures, including one or more gateway nodes, one or more aggregation nodes, or edge or core data centers distributed across layers of the network (e.g., in the arrangements depicted in FIGS. 2 and 3 ). Accordingly, such IPU arrangements may be implemented in an edge computing system by or on behalf of a telecommunication service provider (“telco”, or “TSP”), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities.
  • Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives.
  • Such edge computing systems may be embodied as a type of device, appliance, computer, or other “thing” capable of communicating with other edge, networking, or endpoint components.
  • FIG. 7 depicts a block diagram of example components in a computing device 750 which can operate as a distributed network processing platform.
  • the computing device 750 may include any combinations of the components referenced above, implemented as integrated circuits (ICs), as a package or system-on-chip (SoC), or as portions thereof, discrete electronic devices, or other modules, logic, instruction sets, programmable logic or algorithms, hardware, hardware accelerators, software, firmware, or a combination thereof adapted in the computing device 750 , or as components otherwise incorporated within a larger system.
  • The computing device 750 may include processing circuitry comprising one or both of a network processing unit 752 (e.g., an IPU or DPU, as discussed above) and a compute processing unit 754 (e.g., a CPU).
  • The network processing unit 752 may provide a networked specialized processing unit such as an IPU, DPU, network processing unit (NPU), or other "xPU" outside of the central processing unit (CPU).
  • The processing unit may be embodied as a standalone circuit or circuit package, integrated within an SoC, integrated with networking circuitry (e.g., in a SmartNIC), or integrated with acceleration circuitry, storage devices, or AI or specialized hardware, consistent with the examples above.
  • The compute processing unit 754 may provide a processor such as a central processing unit (CPU) microprocessor, multi-core processor, multithreaded processor, ultra-low voltage processor, embedded processor, or other form of special purpose or specialized processing unit for compute operations.
  • Either the network processing unit 752 or the compute processing unit 754 may be a part of a system on a chip (SoC) which includes components formed into a single integrated circuit or a single package.
  • The network processing unit 752 or the compute processing unit 754 and accompanying circuitry may be provided in a single socket form factor, multiple socket form factor, or a variety of other formats.
  • The processing units 752, 754 may communicate with a system memory 756 (e.g., random access memory (RAM)) over an interconnect 755 (e.g., a bus).
  • The system memory 756 may be embodied as volatile (e.g., dynamic random access memory (DRAM), etc.) memory. Any number of memory devices may be used to provide for a given amount of system memory.
  • A storage 758 may also couple to the processor 752 via the interconnect 755 to provide for persistent storage of information such as data, applications, operating systems, and so forth.
  • The storage 758 may be implemented as non-volatile storage such as a solid-state disk drive (SSD).
  • The components may communicate over the interconnect 755.
  • The interconnect 755 may include any number of technologies, including industry-standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), Compute Express Link (CXL), or any number of other technologies.
  • the interconnect 755 may couple the processing units 752 , 754 to a transceiver 766 , for communications with connected edge devices 762 .
  • the transceiver 766 may use any number of frequencies and protocols.
  • a wireless local area network (WLAN) unit may implement Wi-Fi® communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, or a wireless wide area network (WWAN) unit may implement wireless wide area communications according to a cellular, mobile network, or other wireless wide area protocol.
  • the wireless network transceiver 766 (or multiple transceivers) may communicate using multiple standards or radios for communications at different ranges.
  • the communication circuitry may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., a cellular networking protocol such as a 3GPP 4G or 5G standard, a wireless local area network protocol such as IEEE 802.11/Wi-Fi®, a wireless wide area network protocol, Ethernet, Bluetooth®, Bluetooth Low Energy, an IoT protocol such as IEEE 802.15.4 or ZigBee®, Matter®, low-power wide-area network (LPWAN) or low-power wide-area (LPWA) protocols, etc.) to effect such communication.
  • applicable communications circuitry used by the device may include or be embodied by any one or more of components 766 , 768 , or 770 . Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry.
  • the computing device 750 may include or be coupled to acceleration circuitry 764 , which may be embodied by one or more AI accelerators, a neural compute stick, neuromorphic hardware, an FPGA, an arrangement of GPUs, one or more SoCs, one or more CPUs, one or more digital signal processors, dedicated ASICs, or other forms of specialized processors or circuitry designed to accomplish one or more specialized tasks.
  • These tasks may include AI processing (including machine learning, training, inferencing, and classification operations), visual data processing, network data processing, object detection, rule analysis, or the like. Accordingly, in various examples, applicable means for acceleration may be embodied by such acceleration circuitry.
  • the interconnect 755 may couple the processing units 752 , 754 to a sensor hub or external interface 770 that is used to connect additional devices or subsystems.
  • the devices may include sensors 772, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global navigation system (e.g., GPS) sensors, pressure sensors, and the like.
  • the hub or interface 770 further may be used to connect the edge computing node 750 to actuators 774 , such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.
  • various input/output (I/O) devices may be present within or connected to, the edge computing node 750 .
  • a display or other output device 784 may be included to show information, such as sensor readings or actuator position.
  • An input device 786 such as a touch screen or keypad may be included to accept input.
  • An output device 784 may include any number of forms of audio or visual display, including simple visual outputs such as LEDs or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the edge computing node 750 .
  • a battery 776 may power the edge computing node 750 , although, in examples in which the edge computing node 750 is mounted in a fixed location, it may have a power supply coupled to an electrical grid, or the battery may be used as a backup or for temporary capabilities.
  • a battery monitor/charger 778 may be included in the edge computing node 750 to track the state of charge (SoCh) of the battery 776 .
  • the battery monitor/charger 778 may be used to monitor other parameters of the battery 776 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 776 .
  • a power block 780 or other power supply coupled to a grid, may be coupled with the battery monitor/charger 778 to charge the battery 776 .
  • the instructions 782 on the processing units 752 , 754 may configure execution or operation of a trusted execution environment (TEE) 790 .
  • the TEE 790 operates as a protected area accessible to the processing units 752 , 754 for secure execution of instructions and secure access to data.
  • Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in the edge computing node 750 through the TEE 790 and the processing units 752 , 754 .
  • the computing device 750 may be a server, an appliance computing device, and/or any other type of computing device with the various form factors discussed above.
  • the computing device 750 may be provided by an appliance computing device that is a self-contained electronic device including a housing, a chassis, a case, or a shell.
  • the instructions 782 provided via the memory 756 , the storage 758 , or the processing units 752 , 754 may be embodied as a non-transitory, machine-readable medium 760 including code to direct the processor 752 to perform electronic operations in the edge computing node 750 .
  • the processing units 752 , 754 may access the non-transitory, machine-readable medium 760 over the interconnect 755 .
  • the non-transitory, machine-readable medium 760 may be embodied by devices described for the storage 758 or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices.
  • the non-transitory, machine-readable medium 760 may include instructions to direct the processing units 752 , 754 to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality discussed herein.
  • the terms “machine-readable medium”, “machine-readable storage”, “computer-readable storage”, and “computer-readable medium” are interchangeable.
  • a machine-readable medium also includes any tangible medium that is capable of storing, encoding, or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions.
  • a “machine-readable medium” thus may include but is not limited to, solid-state memories, and optical and magnetic media.
  • the instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., HTTP).
  • a machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format.
  • information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as instructions themselves or a format from which the instructions may be derived.
  • This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like.
  • the information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein.
  • deriving the instructions from the information may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions.
  • the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine-readable medium.
  • the information when provided in multiple parts, may be combined, unpacked, and modified to create the instructions.
  • the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers.
  • a software distribution platform (e.g., one or more servers and one or more storage devices) may be used to distribute software, such as the example instructions discussed above, to one or more devices, such as example processor platform(s) and/or example connected edge devices noted above.
  • the example software distribution platform may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices.
  • the providing entity is a developer, a seller, and/or a licensor of software
  • the receiving entity may be consumers, users, retailers, OEMs, etc., that purchase and/or license the software for use and/or re-sale and/or sub-licensing.
  • the instructions are stored on storage devices of the software distribution platform in a particular format.
  • a format of computer readable instructions includes, but is not limited to a particular code language (e.g., Java, JavaScript, Python, C, C#, SQL, HTML, etc.), and/or a particular code state (e.g., uncompiled code (e.g., ASCII), interpreted code, linked code, executable code (e.g., a binary), etc.).
  • the computer readable instructions stored in the software distribution platform are in a first format when transmitted to an example processor platform(s).
  • the first format is an executable binary that particular types of the processor platform(s) can execute.
  • the first format is uncompiled code that requires one or more preparation tasks to transform the first format to a second format to enable execution on the example processor platform(s).
  • the receiving processor platform(s) may need to compile the computer readable instructions in the first format to generate executable code in a second format that is capable of being executed on the processor platform(s).
  • the first format is interpreted code that, upon reaching the processor platform(s), is interpreted by an interpreter to facilitate execution of instructions.
  • the following discussion refers to various examples of concurrent execution of workloads and workload tasks among distributed compute locations.
  • optimized dataflows and low latencies can be provided for concurrently executed tasks.
  • the following data flow adaptations provide a technical improvement for the utilization of heterogeneous computation resources by reducing response times in closed-loop admission control systems.
  • FIG. 8 depicts an example arrangement of distributed processing provided at an edge computing network layer, using a distributed infrastructure processing unit mesh network. Specifically, FIG. 8 depicts computing operations coordinated among a user layer 810, an edge layer 820, and a cloud layer 830. Consistent with the examples discussed above (e.g., with reference to FIGS. 1 to 3), edge computing operations may be performed at the edge layer 820 based on requests from client devices or consumers at the user layer 810, such as from one or more heterogeneous networks 812, a vehicular network 814, a machine-to-machine (M2M) or device-to-device (D2D) network (not shown), or other network arrangements.
  • the edge layer 820 may further invoke a cloud layer 830 and cloud services 832 to perform further data processing or data retrieval (e.g., at one or more remote data centers or offices).
  • a variety of disaggregated resources available in the edge layer 820 may be combined, pooled, or coordinated in order to perform tasks (e.g., executable portions or segments of one or multiple workloads) for clients and other consumers.
  • resources at a first base station 822 A, including compute resources 842 A, may be coordinated with the compute resources 842 B at a second base station 822 B.
  • Other types of resources, not shown, may include communication resources, storage and caching resources, and the like, provided among a variety of devices or nodes.
  • the resources may be arranged into compute pools, memory pools, or storage pools, coordinated via various interconnects and network protocols.
  • a distributed IPU mesh network 840 enables a variety of coordinated and distributed workload processing operations. For instance, a first IPU (e.g., IPU 844 A) at a first node or system (e.g., base station 822 A) may invoke additional compute resources at a second node or system (e.g., compute resources 842 B at base station 822 B) based on communications to a second IPU (e.g., IPU 844 B).
  • workloads, workload tasks, processing operations, and other related concepts may be distributed across the IPU mesh network 840 based on the performance characteristics and coordination properties discussed herein.
  • FIG. 9 depicts a task graph 900 , providing an example for illustrating the present techniques for optimization of concurrent task execution.
  • the task graph 900 is shown as including various tasks: task 901, task 911, task 912, task 913, task 921, task 931, task 932, and task 933.
  • the arrows in the task graph edges indicate control or data dependencies (or both), which means, for example, that tasks 911, 912, 913 wait for completion of task 901, task 921 waits for tasks 911, 912, 913 to complete, and tasks 931, 932, 933 wait for task 921 to complete.
  • Task 921 in particular is at a join point in this task graph 900.
  • Join points are common and are a natural consequence of extracting as much parallelism as possible from a set of computations (such as from parallel computations on tasks 911, 912, 913 in FIG. 9) and then having a point at which results from those parallel actions (tasks 911, 912, 913) are merged, aggregated, reduced, or otherwise handled (in task 921, for example).
  • merged, aggregated, reduced results often provide either control or data inputs to the next set of computations (such as tasks 931, 932, 933).
  • A common inefficiency that arises from join points is illustrated in FIG. 9.
  • the task graph 900 is labeled to show the results of a hypothetical execution, indicating the units of time (e.g., milliseconds) that each of the different tasks took. For instance, the figure shows that one task took 2 ms to execute, another took 5 ms to execute, and so forth. Because the join-point task 921 needed to wait for tasks 911, 912, 913 to complete, its start was stretched out until the slowest of those tasks completed, so the overall time to reach the join point (79 ms) was much longer than the execution times of the faster predecessor tasks. This delay cascaded to the other tasks 931, 932, 933 that waited on task 921.
  • Join points may cause an overall computation to be delayed by the worst-case delays encountered in executions of preceding tasks. In a larger graph, these delays compound. Since it is inefficient to busy-wait for all other tasks to come to a join point, schedulers, orchestrators, runtimes, or other entities often switch execution resources from a given task graph to some other usages (essentially multiplexing the resources) while a join point is not reached in that task graph. However, this has the effect of further amplifying delays in assigning processing resources and causing a loss of locality in various hardware- and software-managed caches.
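  • To make the join-point delay concrete, the following is a minimal Python sketch (not part of the disclosed embodiments) that computes when a join task can start and how much waiting its faster predecessors accumulate; the graph structure follows the FIG. 9 reference numerals, while the per-task durations and helper names are illustrative assumptions.

```python
# Minimal sketch of join-point delay in a task graph (hypothetical durations).
# Task identifiers follow the FIG. 9 reference numerals; the per-task times
# below are illustrative assumptions, not values taken from the figure.

durations = {                 # execution time of each task, in milliseconds
    901: 2, 911: 5, 912: 7, 913: 79,      # 913 is the laggard in this sketch
    921: 4, 931: 3, 932: 3, 933: 3,
}
deps = {                      # task -> set of tasks it must wait for
    911: {901}, 912: {901}, 913: {901},
    921: {911, 912, 913},                 # join point
    931: {921}, 932: {921}, 933: {921},
}

finish = {}
def finish_time(task):
    """Earliest finish time, assuming a task starts when its deps finish."""
    if task not in finish:
        start = max((finish_time(d) for d in deps.get(task, ())), default=0)
        finish[task] = start + durations[task]
    return finish[task]

join = 921
join_start = max(finish_time(d) for d in deps[join])
# Aggregate wait: how long each completed predecessor sits at the join point.
aggregate_wait = sum(join_start - finish_time(d) for d in deps[join])
print(f"join task {join} can start at t={join_start} ms")
print(f"aggregate wait accumulated at the join point: {aggregate_wait} ms")
```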
  • P90/P95/P99 latencies refer to scenarios where some percentage (i.e., 90, 95, or 99 percent) of the requests are faster than a given latency; inordinate join-point delays therefore show up in these tail latencies.
  • more work cannot come into a system until previous work has been completed. For example, many interactive systems may delay formulating a new request based on results of a previous request. Thus, inordinate delays in reaching join points can cause latency vulnerabilities as well as underutilized, overprovisioned resources and wasted power.
  • the following approaches provide an IPU-based or IPU-assisted, and software guided, tracking of progress in the execution of task graphs.
  • the approaches also provide for an IPU-based or IPU-assisted pinpointing of “laggard” or “straggler” tasks (i.e., tasks that take too long and hold up join points).
  • the approaches also provide IPU-based or IPU-assisted implementation of remediation measures when join-points take too long. This provides a tailored use of flexibilities in task scheduling to simultaneously maximize resource sharing across non-dependent tasks and minimize dilation of time between dependent tasks.
  • a variety of techniques may be used for refactoring or transforming of a task graph.
  • various approaches may be used to break a single computationally demanding task into a collection of smaller tasks {T0, T1, . . . , Tk} in order to execute those smaller tasks concurrently.
  • Other approaches may be applied under complementary areas of optimization related to Program Transformation techniques.
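  • As a minimal sketch of this kind of task refactoring (a generic illustration, not the specific program transformation contemplated above), the following Python example splits the input of one demanding task into k sub-tasks executed concurrently, with their partial results merged at a join point; the work function and chunking policy are assumptions.

```python
# Minimal sketch of refactoring one demanding task into smaller tasks
# {T0, T1, ..., Tk} that can run concurrently (hypothetical work function).
from concurrent.futures import ThreadPoolExecutor

def heavy_task(data):
    return sum(x * x for x in data)            # stand-in for demanding work

def split_task(data, k):
    """Split the input of one task into k roughly equal sub-tasks."""
    chunk = (len(data) + k - 1) // k
    return [data[i:i + chunk] for i in range(0, len(data), chunk)]

data = list(range(100_000))
subtasks = split_task(data, k=8)
with ThreadPoolExecutor(max_workers=8) as pool:
    partials = list(pool.map(heavy_task, subtasks))
result = sum(partials)                          # the join point merges results
assert result == heavy_task(data)
```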
  • FIG. 10 depicts a workflow sequence 1000 for identifying and triggering remediation for concurrent execution bottlenecks.
  • actions are taken for triggering remediation, identifying what to remedy, and determining how to remedy.
  • This sequence 1000 is described based on the scenario in FIG. 11, which depicts a further example of concurrent task execution in a task graph 1110 based on the tasks (tasks 901, 911, 912, 913, 921) and the dependencies introduced above in FIG. 9.
  • tasks 911, 912, and 913 each have to complete before the join-point task (task 921) can begin.
  • in this scenario, time index 07.00 represents the time for completing the earlier tasks that first reach the join point.
  • any additional wait after time index 07.00 is in some way a measure of inefficiency in the scheduling of the remaining task.
  • operation 1010 is performed to monitor time metrics (and, as applicable, perform evaluation or calculation of time metrics) for one or more tasks.
  • a waiting time metric begins to be monitored to identify the time that a task (e.g., the join-point task 921) is waiting. This monitoring can be performed in a lightweight manner by sampling across the task graph, for instance, at the different IPUs where tasks are either scheduled, or where precursor tasks have already been dispatched.
  • a task-graph wait-state can be defined at the application level and within a task graph system (such as Intel Threading Building Blocks, TensorFlow, graph scheduler middleware, etc.). Such a task graph system can be used to aggregate this wait-state across all machinery (IPUs, CPUs, XPUs, etc.).
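  • A minimal sketch of such lightweight aggregation appears below; the per-IPU sample data is a hard-coded assumption standing in for IPU-local telemetry, and the function name is illustrative.

```python
# Minimal sketch of aggregating a task-graph wait-state across several IPUs.
# Each IPU reports (task_id, waiting_ms) samples for completed tasks that are
# parked at a join point; here the samples are hypothetical literals.

ipu_samples = {
    "ipu-a": [(911, 12.0), (912, 9.5)],
    "ipu-b": [(913, 0.0)],                    # the laggard is still running
}

def aggregate_wait_state(samples_by_ipu):
    """Sum the join-point waiting time reported by every IPU."""
    total_wait = 0.0
    waiting_tasks = 0
    for samples in samples_by_ipu.values():
        for _task_id, waiting_ms in samples:
            total_wait += waiting_ms
            waiting_tasks += int(waiting_ms > 0)
    return total_wait, waiting_tasks

total_wait, waiting_tasks = aggregate_wait_state(ipu_samples)
print(f"{waiting_tasks} tasks waiting, {total_wait:.1f} ms aggregate wait")
```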
  • operation 1020 is performed to determine a wait time relative to a threshold.
  • a threshold may refer to a fixed value, a calculated value, a range of values, or other comparable data item.
  • the determination of operation 1020 monitors for when an aggregated wait time penalty rises and crosses a defined threshold (e.g., a weighted and/or normalized threshold).
  • this defined threshold may be weighted by the number of completed tasks that are waiting for the remaining tasks at a join point in a task graph, and in a further example, this defined threshold may be normalized by the time that it takes for the first or the second task to reach the join point.
  • the use of a weighted and normalized threshold is shown in the threshold evaluation 1130 .
  • a threshold is used to trigger a type of QoS “alarm”. This threshold can be triggered if, for example, the total aggregated time for the tasks of a workload is X and the current amount of waiting time for the task that is blocking the other tasks is greater than 0.1*X, where the threshold is set to 0.10 (10 percent).
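  • The following minimal sketch illustrates this kind of QoS alarm check; the weighting and normalization choices shown are assumptions consistent with the examples above, not a prescribed formula.

```python
# Minimal sketch of a threshold-based QoS "alarm" for join-point waiting.

def qos_alarm(total_task_time_ms, blocking_wait_ms, threshold=0.10):
    """Trigger when the blocking task's wait exceeds threshold * X, where X is
    the total aggregated time for the workload's tasks (e.g., 10 percent)."""
    return blocking_wait_ms > threshold * total_task_time_ms

def weighted_wait(blocking_wait_ms, completed_waiters, first_arrival_ms):
    """One possible weighting/normalization (an assumption): scale the wait by
    the number of completed tasks parked at the join point, and normalize by
    the time the first task took to reach the join point."""
    return (blocking_wait_ms * completed_waiters) / first_arrival_ms

print(qos_alarm(total_task_time_ms=100.0, blocking_wait_ms=15.0))    # True
print(weighted_wait(blocking_wait_ms=15.0, completed_waiters=2,
                    first_arrival_ms=7.0))                           # ~4.29
```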
  • a remediation may be initiated, including by selecting one or more tasks (or, groups or types of applicable tasks) and then expediting their execution.
  • this evaluation of the threshold may be performed by a software orchestrator, scheduler, or the like. Such evaluation operations may run on a CPU, or may be offloaded to one or more IPUs which can locally aggregate the wait times coming in from other IPUs.
  • operation 1030 is performed to identify characteristics of the particular task(s) for remediation, thus identifying what aspect of the task execution is problematic.
  • the system can identify which executing task or tasks (or, groups or types of tasks) should be remediated so that they can be completed sooner (versus taking no action) (e.g., as in operation 1131 ).
  • the identification of what task to remedy can be performed using various approaches.
  • this includes identifying the task(s) that have not yet reached the join point, and from these tasks, identifying those tasks that have exhibited the slowest progress. Progress may be measured by such rates as #instructions-retired/wall-clock-time, #system-calls-performed/wall-clock-time, #network-messages-sent-and-received/wall-clock-time.
  • an application cohort or a model can provide estimates for a count of monitorable operations of some type, and application telemetry can provide the fraction of those operations that have completed at a given point in time. Further, this may allow selection of the task or tasks whose fraction of operations completed is among the “K” lowest fractions at a given time.
  • K may be a specified static fraction, or a recommendation formula for “K” may specify K as a function of how much weighted wait-time has accumulated.
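  • A minimal sketch of this selection step is shown below; the progress fractions and the formula relating K to accumulated wait time are illustrative assumptions.

```python
# Minimal sketch of picking laggard tasks at a join point: among tasks that
# have not reached the join point, select those whose fraction of completed
# operations is among the K lowest. Telemetry values here are hypothetical.

def pick_laggards(progress, k):
    """progress: {task_id: fraction_completed in [0, 1]} for unfinished tasks.
    Returns the k task ids with the lowest completion fraction."""
    return sorted(progress, key=progress.get)[:k]

def recommend_k(weighted_wait_ms, per_task_budget_ms=50.0, k_max=4):
    """Let K grow with the accumulated weighted wait time (an assumption)."""
    return min(k_max, 1 + int(weighted_wait_ms // per_task_budget_ms))

progress = {911: 0.95, 912: 0.90, 913: 0.40}   # task 913 is clearly lagging
k = recommend_k(weighted_wait_ms=120.0)        # -> 3
print(pick_laggards(progress, k))              # -> [913, 912, 911]
```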
  • a laggard task (e.g., identified in operation 1132), as used herein, is a task identified according to one of the approaches in operation 1030 that needs to increase its pace of execution in order to reach the join point sooner than it otherwise would.
  • the remediation is performed (e.g., expediting execution as in operation 1133 ).
  • Remediation may be accomplished by one of several alternatives. For instance, remediation may be performed by use of one or more of the following approaches.
  • FBI Fall-back infrastructure.
  • FBI refers to a small portion of high performance infrastructure that is allocated on an ad-hoc basis, for short durations, to jobs or tasks that need to run with the fewest possible bandwidth or power constraints. Jobs or tasks are assigned to this infrastructure only if they are identified (e.g., in operation 1040) and are movable to such infrastructure.
  • Such fallback infrastructure may be temporarily or permanently used, and logic may be used to determine how to reverse the use of the fallback infrastructure.
  • DEA Deferred execution arrangement(s).
  • DEA refers to arrangements that apply to a category of tasks that have no forward dependencies on their completion times. Jobs or tasks on which no other jobs or tasks depend are also referred to as singletons. Under these arrangements, such tasks can be run whenever FBI (as above) resources are underutilized, including during times where such resources are entirely idle. Similarly, such jobs can be de-scheduled or suspended (pre-empted) for durations of time even when they are not running on ordinary (non-FBI) infrastructure.
  • CIA Compute-if-available jobs.
  • CIA jobs refer to a set of jobs that can be subject to DEA for defined periods of time. Frequently these may be jobs that are submitted and run as batch jobs, and while they may have overall SLAs associated with them, the amount of slack that is available within those SLAs makes it possible for such jobs to have long durations of time during which they can be run on a preemptable basis.
  • HCR Hardware-assisted checkpointing/resume.
  • HCR refers to a capability engineered into IPUs to perform background checkpointing of jobs (tasks) so that they can be efficiently migrated (without burdening CPUs) from execution into a suspended state at a given execution host, or reactivated by one or more IPUs from a suspended state at one execution host to a ready-to-run state (i.e., resumed) at the same or a different execution host.
  • HCR is enabled and used for task duplication at FBI resources, in parallel with the ongoing execution.
  • FBI resources are pre-reserved for prioritized allocation. While FBI resources are not being used for dealing with tail latency outliers, they are furnished for low-priority/best-effort/preemptable execution of other singleton tasks (such as CIA tasks) under deferred execution arrangements (DEA). These can be registered ahead of time and funneled by the IPUs to the FBI.
  • when a job or task is assigned to the FBI, the job or task may be classified under one of four categories (a classification sketch follows the category list below).
  • Category one: a singleton task that just absorbs surplus utilization at an FBI host, instead of such time or power being wasted. It is capable of being preempted for arbitrary durations, and may also be transferred out of FBI and assigned by an IPU to any other normal execution host that has low utilization.
  • Category two: a singleton task that should not ordinarily be preempted, but may be asked to yield; it yields within a short, well-defined interval so that it can minimize the amount of state that needs to be saved or restored. In general, because such tasks require very little state to be saved or restored when yielding voluntarily, they may be preferred over ordinary CIA tasks at the FBI nodes.
  • Category three: priority tasks that are preemptable. Tasks may be temporarily assigned to FBI because they need to be sped up (e.g., as discussed in operations 1030, 1040, above). However, barring a few exceptions, a priority task may be preemptable or may be capable of yielding. Such tasks may be preempted when they need to be de-scheduled temporarily in order to make room for tasks that have even higher priority, or as determined by a scheduler that may time-share the fall-back infrastructure across a limited number of equally high-priority tasks.
  • Category four: priority tasks that are non-preemptable. Certain tasks that are particularly heavy in their computation demands, but overall rare in occurrence, may be permanently assigned to FBI to ensure that they can complete in the minimum duration of time possible. The identification of such tasks may be based not on instantaneous selection (e.g., in operations 1030, 1040, above), but on the basis of long-term (historical) observations about their past performance on non-FBI infrastructure. Alternatively, the tasks may be identified by an entity that creates a task graph or estimates task execution times based on various parameters such as the volume or velocity of inputs.
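  • The following minimal sketch illustrates one way the four categories could be represented when a job is assigned to the FBI; the field names and the classification policy are assumptions for illustration only.

```python
# Minimal sketch of classifying a job assigned to fall-back infrastructure
# (FBI) into the four categories described above. Field names are assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class FbiCategory(Enum):
    SINGLETON_PREEMPTABLE = auto()      # category one: absorbs surplus capacity
    SINGLETON_YIELDING = auto()         # category two: yields quickly on request
    PRIORITY_PREEMPTABLE = auto()       # category three: expedited, can yield
    PRIORITY_NON_PREEMPTABLE = auto()   # category four: rare, heavy, pinned

@dataclass
class Job:
    singleton: bool           # no other jobs or tasks depend on it
    expedited: bool           # selected for remediation (e.g., a laggard)
    yields_quickly: bool      # can yield within a short, well-defined interval
    historically_heavy: bool  # long-term observations show heavy compute demand

def classify(job: Job) -> FbiCategory:
    if job.historically_heavy:
        return FbiCategory.PRIORITY_NON_PREEMPTABLE
    if job.expedited:
        return FbiCategory.PRIORITY_PREEMPTABLE
    if job.singleton and job.yields_quickly:
        return FbiCategory.SINGLETON_YIELDING
    return FbiCategory.SINGLETON_PREEMPTABLE

print(classify(Job(singleton=False, expedited=True,
                   yields_quickly=False, historically_heavy=False)))
```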
  • after remediation has been effected (e.g., in operation 1050), the next downstream task from such a laggard task (straggler), when launched, implicitly begins to be tracked for up to a short time. If the rate of progress is high, then the remediation can be identified as complete. If the rate of progress is low, then another remediation or a variation of the remediation may be applied (including repeating the remediation steps above).
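  • A minimal sketch of this verification loop follows; the progress source, observation window, and rate threshold are assumptions.

```python
# Minimal sketch of verifying a remediation: briefly track the next downstream
# task's rate of progress and re-apply (or vary) the remediation if it is low.
import itertools
import time

def track_downstream(progress_fn, window_s=0.1, min_rate=0.05):
    """progress_fn() returns a completion fraction in [0, 1]. Returns True if
    the rate of progress over the window is high enough to treat the
    remediation as complete."""
    before = progress_fn()
    time.sleep(window_s)
    return (progress_fn() - before) / window_s >= min_rate

def remediate_until_progressing(progress_fn, apply_remediation, max_rounds=3):
    for _ in range(max_rounds):
        if track_downstream(progress_fn):
            return True                       # remediation judged complete
        apply_remediation()                   # try another or varied remedy
    return False

# Hypothetical usage: a simulated task that advances 10% per observation.
sim = itertools.count()
print(remediate_until_progressing(lambda: 0.1 * next(sim), lambda: None))
```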
  • EDF earliest deadline first scheduling.
  • EDF is used primarily on a single node (host) to schedule jobs/tasks with the objective of achieving low latency response times, and is complementary to this proposal as follows.
  • the actions proposed above (e.g., in operations 1010-1040) may precede the application of EDF by selecting an artificially low (near-term) deadline for the identified (to-be-expedited) tasks, without migrating the identified tasks to FBI.
  • the amount of infrastructure capacity at any given node may not be sufficient to expedite a to-be-remedied task locally.
  • HCR can be used to automatically migrate such a task to FBI, or to migrate some other contending task of high priority to the FBI (e.g., so that the task in question can be given a larger slice of local resources).
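  • The EDF interplay can be sketched as follows; the queue class and task identifiers are assumptions, and the point illustrated is only the selection of an artificially near-term deadline for a to-be-expedited task.

```python
# Minimal sketch of expediting a laggard under earliest-deadline-first (EDF)
# scheduling by assigning it an artificially low (near-term) deadline.
import heapq
import time

class EdfQueue:
    def __init__(self):
        self._heap = []                      # entries: (absolute_deadline, id)

    def submit(self, task_id, deadline_s):
        heapq.heappush(self._heap, (deadline_s, task_id))

    def expedite(self, task_id, now=None):
        """Re-queue a to-be-expedited task with a near-term deadline."""
        now = time.monotonic() if now is None else now
        self._heap = [(d, t) for (d, t) in self._heap if t != task_id]
        heapq.heapify(self._heap)
        heapq.heappush(self._heap, (now, task_id))   # earliest deadline wins

    def next_task(self):
        return heapq.heappop(self._heap)[1] if self._heap else None

q = EdfQueue()
q.submit("task-913", deadline_s=time.monotonic() + 30)
q.submit("task-931", deadline_s=time.monotonic() + 10)
q.expedite("task-913")                        # the laggard now runs first
print(q.next_task())                          # -> task-913
```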
  • FIG. 12 provides a flowchart of an example method 1200 for optimizing concurrent execution of workload tasks, in a distributed computing environment.
  • the method 1200 may be implemented by one or more networked processing units, and instructions embodied thereon to be executed by the networked processing unit(s), consistent with the examples of networked processing units (e.g., IPUs), as discussed above. Further, these operations may be performed in a scenario where multiple tasks of a workload are distributed among multiple compute locations of the distributed computing environment.
  • operations are performed to identify multiple tasks and processing dependencies within a computing workload.
  • the operations may include identifying and tracking multiple tasks of a computing workload that includes processing dependencies among the tasks, in a scenario where two or more of the tasks are executed concurrently.
  • identifying the multiple tasks of the workload includes splitting the workload into the multiple tasks.
  • operations are performed to monitor or evaluate execution time for each task of the computing workload.
  • the execution time is monitored or tracked relative to a respective execution time threshold applicable for each of the tasks (or, for a group of the tasks).
  • operations are performed to calculate the execution threshold for each task of the computing workload (including, repeating the calculation and monitoring on an ongoing basis, relative to the execution threshold).
  • calculating the execution time threshold for the particular task includes weighting the execution time threshold by an amount of waiting time elapsed for at least one completed task to reach the join point (and wait for the particular task).
  • the threshold is calculated for certain types or groups of tasks. Other examples for weighting and normalizing values relative to a threshold are discussed above.
  • operations are performed to identify the execution time of a particular task as exceeding an execution time threshold for the particular task.
  • this particular task provides an input to a dependent task that is a join point of the workload.
  • the dependent task may receive a control input or data input from the particular task and at least one previous task of the workload.
  • a remediation (as discussed below) is identified and applied based on determining that the dependent task is a join point of the workload.
  • operations are performed to determine a remediation based on the particular task and the identified execution time (e.g., based on the identified execution time meeting, exceeding, or crossing some threshold value that is dynamically calculated for the task as in 1240 ).
  • the remediation includes use of fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time.
  • the use of the fallback compute infrastructure may include use of hardware-assisted resumption, to temporarily (or, permanently) migrate the particular task from a first compute location to a second compute location in the distributed computing environment.
  • the use of the fallback compute infrastructure may also or alternatively include the use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, with the use of the deferred execution arrangement being coordinated during underutilization of the fallback compute infrastructure.
  • the use of the fallback compute infrastructure may also or alternatively include (or be based on) a classification of the remediation, with such classification provided from among a plurality of priority categories according to the particular task.
  • operations are performed to apply the remediation (alternately, to instantiate the remediation, to invoke the remediation, or to cause the remediation to be applied), and to monitor the results of the remediation, such as to increase the speed of execution of the workload.
  • other types of remediation may be applied which are not directly time-based (e.g., to reduce costs to a client, to distribute resources, or to reduce overall network latency).
  • the method 1200 is performed by a first networked processing unit operating as an orchestrator or scheduler of the workload, and the remediation for the particular task is implemented with use of a second networked processing unit (e.g., coordinated or under the control of the first networked processing unit).
  • the particular task being remediated may be executed by a first set of compute resources, while the remediation includes use of a second set of compute resources associated with the second networked processing unit.
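  • A minimal sketch of this orchestrator-side flow appears below; the message format and the send_to_ipu() transport are assumptions standing in for IPU-to-IPU communication.

```python
# Minimal sketch of method 1200 from the orchestrator's perspective: a first
# networked processing unit detects a threshold violation and asks a second
# networked processing unit to host the remediation.

def send_to_ipu(ipu_id, message):
    """Placeholder for IPU-to-IPU messaging (assumed transport)."""
    print(f"-> {ipu_id}: {message}")

def orchestrate(task_times_ms, thresholds_ms, fallback_ipu="ipu-b"):
    """Identify a task whose execution time exceeds its threshold, determine a
    remediation, and cause it to be applied via the second processing unit."""
    for task_id, elapsed in task_times_ms.items():
        if elapsed > thresholds_ms[task_id]:
            remediation = {"task": task_id,
                           "action": "migrate_to_fallback",
                           "duration_ms": 5000}
            send_to_ipu(fallback_ipu, remediation)
            return remediation
    return None

print(orchestrate({913: 72.0, 912: 9.0}, {913: 20.0, 912: 20.0}))
```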
  • method 1200 may be executed in a variety of situations and conditions, including continuously in some examples.
  • continuous execution of the method 1200 may enable a mesh network to continuously adapt to changing network and compute node conditions.
  • the execution of the method 1200 may be triggered or controlled by other scenarios and situations.
  • the techniques of the method 1200 may be enhanced by a variety of other data and telemetry analysis operations not directly illustrated in the flowchart of FIG. 12 .
  • artificial intelligence or other trained data analysis techniques may be used to identify trends, windows, thresholds, or active/inactive periods, for planning and orchestration and remediation actions as discussed above.
  • a first extension relates to the use of a task-graph registration interface.
  • a task-graph registration interface may be used to identify join points by an application or application-cohort. This interface may be used for task graphs in which a common task is downstream of multiple other tasks (such as task 921, which is downstream of tasks 911, 912, and 913 in FIGS. 9 and 10) but where that common task is wait-free. The common task can be initiated as soon as one task (or a proper subset of the tasks preceding it) completes. Such tasks may be explicitly identified as non-join points.
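  • One possible shape for such a registration interface is sketched below; the class and method names, and the wait-free task used in the usage example, are assumptions.

```python
# Minimal sketch of a task-graph registration interface that lets an
# application mark which downstream tasks are join points and which are
# wait-free (non-join points).

class TaskGraphRegistry:
    def __init__(self):
        self.deps = {}            # task_id -> set of upstream task ids
        self.join_points = set()

    def register_task(self, task_id, upstream=(), join_point=False):
        """join_point=False marks a wait-free task: it may start as soon as
        any one (or a proper subset) of its upstream tasks completes."""
        self.deps[task_id] = set(upstream)
        if join_point:
            self.join_points.add(task_id)

    def is_join_point(self, task_id):
        return task_id in self.join_points

registry = TaskGraphRegistry()
registry.register_task(921, upstream={911, 912, 913}, join_point=True)
registry.register_task(941, upstream={931, 932}, join_point=False)  # hypothetical wait-free task
print(registry.is_join_point(921), registry.is_join_point(941))
```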
  • a second extension relates to the use of the task-graph registration interface of the previous extension, with an application or application-cohort providing (e.g., registering) a model by which the mean and variance of task completion times can be estimated.
  • this enhanced registration interface may estimate times on a reference CPU (or XPU, i.e., another type of processing unit) with various simplifying assumptions, such as the assumption of no noisy neighbors, no network congestion, etc. These estimated completion times provide a way of predicting reasonable completion times for various tasks on a given infrastructure, and enable a comparison with other completed tasks (e.g., how much they deviated from estimated times).
  • a third extension relates to a call-back interface.
  • this call-back interface enables an orchestrator or a scheduler to call into an application or an application cohort, in order to obtain application telemetry.
  • Such telemetry may provide an application's estimate of the progress rate of a task in the application (i.e., an interface to indicate how much progress or processing has been accomplished).
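  • The call-back interface might look like the following minimal sketch; the class name and the lambda-based progress estimate are assumptions.

```python
# Minimal sketch of a call-back interface: the orchestrator or scheduler
# registers a callable supplied by the application (or application cohort)
# and invokes it to obtain the application's own estimate of task progress.
from typing import Callable, Dict

class ProgressCallbacks:
    def __init__(self):
        self._callbacks: Dict[int, Callable[[], float]] = {}

    def register(self, task_id: int, callback: Callable[[], float]):
        """callback() returns the fraction of the task's work completed."""
        self._callbacks[task_id] = callback

    def progress(self, task_id: int) -> float:
        cb = self._callbacks.get(task_id)
        return cb() if cb else float("nan")   # no estimate available

callbacks = ProgressCallbacks()
callbacks.register(913, lambda: 0.4)          # application-side estimate
print(callbacks.progress(913))
```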
  • a fourth extension relates to an interface to pre-reserve time durations on FBI nodes. This interface can be used to perform pre-scheduled HCR migrations if they become necessary (in some scenarios, only if they become necessary).
  • a fifth extension provides additional forms of remediation, such as task-internal self-remediation.
  • self-remediation may include an ability to perform a task or tasks at a lower accuracy so that it can deliver an approximate result (e.g., lower precision, or higher statistical variance).
  • self-remediation may include an ability to return a less recently updated statistic, or to substitute a time-series projected estimate for a result that is repeatedly or periodically computed.
  • the exploration space for producing recommendations may be very large, while a stratified sample across the recommendations may be exponentially smaller in size, and still meet a high enough quality goal for producing top-N recommendations for small enough N.
  • a coordinator IPU may provide a task-graph interface by which the implementation of a task in a task-graph can request progressive updates for wait times and accordingly tailor its result computation strategy.
  • a sixth extension provides remediation that enables more than one type of specialization for a fallback infrastructure.
  • remediation is not limited to high throughput/low-latency compute, but also to different types of fallback infrastructure for different combinations of performance characteristics.
  • Some of the relevant performance characteristics that are considered may include: CPU speed, cache sizes, memory capacities, memory and I/O bandwidths, GPU capabilities, and the like.
  • a seventh extension provides the broadcast of a back-pressure signal as FBI resources saturate. This enables task graphs with larger aggregate numbers of joins (e.g., a weighted sum of join points with weights proportional to in-degrees) to receive higher priorities and earlier scheduling than others.
  • Example 1 is a method for task management of a workload in a distributed computing environment, comprising: identifying multiple tasks of a computing workload, wherein the workload includes processing dependencies among the tasks, and wherein two or more of the tasks are executed concurrently; monitoring or evaluating an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks; identifying the execution time of a particular task as exceeding an execution time threshold for the particular task; determining a remediation based on the particular task and the identified execution time, the remediation including use of other compute resources in the distributed computing environment; and applying or causing the remediation to increase speed of execution of the workload.
  • Example 2 the subject matter of Example 1 optionally includes wherein the particular task provides an input to a dependent task, and wherein the dependent task is a join point of the workload that receives a control input or data input from the particular task and at least one previous task of the workload.
  • Example 3 the subject matter of Example 2 optionally includes wherein the remediation is applied in response to determining that the dependent task is a join point of the workload.
  • Example 4 the subject matter of any one or more of Examples 2-3 optionally include the method further comprising: calculating the execution time threshold for the particular task, wherein the execution time threshold is weighted by an amount of waiting time elapsed for at least one completed task to reach the join point and wait for the particular task.
  • Example 5 the subject matter of any one or more of Examples 1-4 optionally include wherein identifying the multiple tasks of the workload comprises splitting the workload into the multiple tasks, and wherein the method further comprises distributing the multiple tasks among multiple compute locations of the distributed computing environment.
  • Example 6 the subject matter of any one or more of Examples 1-5 optionally include wherein the remediation includes use of fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time.
  • Example 7 the subject matter of Example 6 optionally includes wherein the use of the fallback compute infrastructure includes use of hardware-assisted resumption, to migrate the particular task from a first compute location to a second compute location in the distributed computing environment.
  • Example 8 the subject matter of any one or more of Examples 6-7 optionally include wherein the use of the fallback compute infrastructure includes use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, and wherein the use of the deferred execution arrangement is coordinated during underutilization of the fallback compute infrastructure.
  • Example 9 the subject matter of any one or more of Examples 6-8 optionally include wherein the use of the fallback compute infrastructure is based on a classification of the remediation, the classification provided from among a plurality of priority categories according to the particular task.
  • Example 10 the subject matter of any one or more of Examples 1-9 optionally include wherein the method is performed by a first networked processing unit operating as an orchestrator or scheduler of the workload, and wherein the remediation for the particular task is implemented with use of a second networked processing unit.
  • Example 11 the subject matter of Example 10 optionally includes wherein the particular task is executed by a first set of compute resources, and wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit.
  • Example 12 is a device, comprising: a networked processing unit, the networked processing unit connected to a distributed computing environment via a network; and a storage medium including instructions embodied thereon, wherein the instructions, when executed by the networked processing unit, configure the networked processing unit to: identify multiple tasks of a computing workload, wherein the workload includes processing dependencies among the tasks, and wherein two or more of the tasks are executed concurrently; monitor or evaluate an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks; identify the execution time of a particular task as exceeding an execution time threshold for the particular task; determine a remediation based on the particular task and the identified execution time, the remediation including use of other compute resources in the distributed computing environment; and apply or cause the remediation to increase speed of execution of the workload.
  • Example 13 the subject matter of Example 12 optionally includes wherein the particular task provides an input to a dependent task, and wherein the dependent task is a join point of the workload that receives a control input or data input from the particular task and at least one previous task of the workload.
  • Example 14 the subject matter of Example 13 optionally includes wherein the remediation is applied in response to determining that the dependent task is a join point of the workload.
  • Example 15 the subject matter of any one or more of Examples 13-14 optionally include the instructions further to configure the networked processing unit to: calculate the execution time threshold for the particular task, wherein the execution time threshold is weighted by an amount of waiting time elapsed for at least one completed task to reach the join point and wait for the particular task.
  • Example 16 the subject matter of any one or more of Examples 12-15 optionally include wherein to identify the multiple tasks of the workload is performed in response to splitting the workload into the multiple tasks, and wherein the instructions further configure the networked processing unit to cause the multiple tasks to be distributed among multiple compute locations of the distributed computing environment.
  • Example 17 the subject matter of any one or more of Examples 12-16 optionally include wherein the remediation includes causing fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time.
  • Example 18 the subject matter of Example 17 optionally includes wherein use of the fallback compute infrastructure includes use of hardware-assisted resumption, to migrate the particular task from a first compute location to a second compute location in the distributed computing environment.
  • Example 19 the subject matter of any one or more of Examples 17-18 optionally include wherein use of the fallback compute infrastructure includes use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, and wherein the use of the deferred execution arrangement is coordinated during underutilization of the fallback compute infrastructure.
  • Example 20 the subject matter of any one or more of Examples 17-19 optionally include wherein use of the fallback compute infrastructure is based on a classification of the remediation, the classification provided from among a plurality of priority categories according to the particular task.
  • Example 21 the subject matter of any one or more of Examples 12-20 optionally include wherein the networked processing unit operates as an orchestrator or scheduler of the workload, and wherein the remediation for the particular task is implemented with use of a second networked processing unit connected via the network.
  • Example 22 the subject matter of Example 21 optionally includes wherein the particular task is executed by a first set of compute resources associated with the device, and wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit.
  • Example 23 is a machine-readable medium (e.g., a non-transitory storage medium) comprising information (e.g., data) representative of instructions, wherein the instructions, when executed by processing circuitry, cause the processing circuitry to perform, implement, or deploy any of Examples 1-22.
  • Example 24 is an apparatus of an edge computing system comprising means to implement any of Examples 1-23, or other subject matter described herein.
  • Example 25 is an apparatus of an edge computing system comprising logic, modules, circuitry, or other means to implement any of Examples 1-23, or other subject matter described herein.
  • Example 26 is a networked processing unit (e.g., an infrastructure processing unit as discussed here) or system including a networked processing unit, configured to implement any of Examples 1-23, or other subject matter described herein.
  • Example 27 is an edge computing system, including respective edge processing devices and nodes to invoke or perform any of the operations of Examples 1-23, or other subject matter described herein.
  • Example 28 is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of any Examples 1-23, or other subject matter described herein.
  • Example 29 is a system to implement any of Examples 1-28.
  • Example 30 is a method to implement any of Examples 1-28.

Abstract

Various approaches for managing distributed compute operations for workload execution of concurrent tasks, including with the use of infrastructure processing units (IPUs) and similar networked processing units, are disclosed. An example method may include: identifying multiple tasks of a computing workload, for a workload that provides processing dependencies among the tasks, and that uses concurrent execution with one or more of the tasks; monitoring an execution time for each of the tasks, relative to an execution time threshold for each of the tasks; identifying the execution time of a particular task as exceeding an execution time threshold for the particular task; determining a remediation based on the particular task and the identified execution time, with the remediation including use of other compute resources in the distributed computing environment for the workload; and applying the remediation to increase speed of execution of the workload.

Description

    PRIORITY CLAIM
  • This application claims the benefit of priority to U.S. Provisional Patent Application No. 63/425,857, filed Nov. 16, 2022, and titled “COORDINATION OF DISTRIBUTED NETWORKED PROCESSING UNITS”, which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • Embodiments described herein generally relate to data processing, network communication, and communication system implementations of distributed computing, including the implementations with the use of networked processing units such as infrastructure processing units (IPUs) or data processing units (DPUs).
  • BACKGROUND
  • System architectures are moving to highly distributed multi-edge and multi-tenant deployments. Deployments may have different limitations in terms of power and space. Deployments also may use different types of compute, acceleration, and storage technologies in order to overcome these power and space limitations. Deployments also are typically interconnected in tiered and/or peer-to-peer fashion, in an attempt to create a network of connected devices and edge appliances that work together.
  • Edge computing, at a general level, has been described as systems that provide the transition of compute and storage resources closer to endpoint devices at the edge of a network (e.g., consumer computing devices, user equipment, etc.). As compute and storage resources are moved closer to endpoint devices, a variety of advantages have been promised such as reduced application latency, improved service capabilities, improved compliance with security or data privacy requirements, improved backhaul bandwidth, improved energy consumption, and reduced cost. However, many deployments of edge computing technologies—especially complex deployments for use by multiple tenants—have not been fully adopted.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings in which:
  • FIG. 1 illustrates an overview of a distributed edge computing environment, according to an example;
  • FIG. 2 depicts computing hardware provided among respective deployment tiers in a distributed edge computing environment, according to an example;
  • FIG. 3 depicts additional characteristics of respective deployments tiers in a distributed edge computing environment, according to an example;
  • FIG. 4 depicts a computing system architecture including a compute platform and a network processing platform provided by an infrastructure processing unit, according to an example;
  • FIG. 5 depicts an infrastructure processing unit arrangement operating as a distributed network processing platform within network and data center edge settings, according to an example;
  • FIG. 6 depicts functional components of an infrastructure processing unit and related services, according to an example;
  • FIG. 7 depicts a block diagram of example components in an edge computing system which implements a distributed network processing platform, according to an example;
  • FIG. 8 depicts an arrangement of distributed processing provided at an edge computing network layer, according to an example;
  • FIG. 9 depicts a task graph illustrating scenarios for optimization of concurrent task execution, according to an example;
  • FIG. 10 depicts a workflow sequence for identifying and triggering remediation for concurrent execution bottlenecks, according to an example;
  • FIG. 11 depicts a further example scenario of concurrent task execution, according to an example; and
  • FIG. 12 depicts a flowchart of an example method for optimizing concurrent execution of workload tasks, according to an example.
  • DETAILED DESCRIPTION
  • The following introduces various techniques to deploy, identify, manage, and respond to concurrent execution of tasks, including to optimize join points for such tasks. Such optimization may provide significant advantages in a distributed compute environment (such as using the distributed IPU architecture discussed in the following paragraphs). Among other benefits, such techniques enable improved power efficiency by selectively applying increased power or resources only for tasks that need such capability. Further, the following provides a precise identification of a task to be moved, re-deployed, or replicated, without needing to waste computation or power resources. Additional details on such optimization techniques are provided after a discussion of distributed edge computing scenarios.
  • FIG. 1 is a block diagram 100 showing an overview of a distributed edge computing environment, which may be adapted for implementing the present techniques for distributed networked processing units. As shown, the edge cloud 110 is established from processing operations among one or more edge locations, such as a satellite vehicle 141, a base station 142, a network access point 143, an on premise server 144, a network gateway 145, or similar networked devices and equipment instances. These processing operations may be coordinated by one or more edge computing platforms 120 or systems that operate networked processing units (e.g., IPUs, DPUs) as discussed herein.
  • The edge cloud 110 is generally defined as involving compute that is located closer to endpoints 160 (e.g., consumer and producer data sources) than the cloud 130, such as autonomous vehicles 161, user equipment 162, business and industrial equipment 163, video capture devices 164, drones 165, smart cities and building devices 166, sensors and IoT devices 167, etc. Compute, memory, network, and storage resources that are offered at the entities in the edge cloud 110 can provide ultra-low or improved latency response times for services and functions used by the endpoint data sources as well as reduce network backhaul traffic from the edge cloud 110 toward cloud 130 thus improving energy consumption and overall network usages among other benefits.
  • Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer end point devices than at a base station or a central office data center). As a general design principle, edge computing attempts to minimize the number of resources needed for network services, through the distribution of more resources that are located closer both geographically and in terms of in-network access time.
  • FIG. 2 depicts examples of computing hardware provided among respective deployment tiers in a distributed edge computing environment. Here, one tier at an on-premise edge system is an intelligent sensor or gateway tier 210, which operates network devices with low power and entry-level processors and low-power accelerators. Another tier at an on-premise edge system is an intelligent edge tier 220, which operates edge nodes with higher power limitations and may include a high-performance storage.
  • Further in the network, a network edge tier 230 operates servers including form factors optimized for extreme conditions (e.g., outdoors). A data center edge tier 240 operates additional types of edge nodes such as servers, and includes increasingly powerful or capable hardware and storage technologies. Still further in the network, a core data center tier 250 and a public cloud tier 260 operate compute equipment with the highest power consumption and largest configuration of processors, acceleration, storage/memory devices, and highest throughput network.
  • In each of these tiers, various forms of Intel® processor lines are depicted for purposes of illustration; it will be understood that other brands and manufacturers of hardware will be used in real-world deployments. Additionally, it will be understood that additional features or functions may exist among multiple tiers. One such example is connectivity and infrastructure management that enables a distributed IPU architecture, which can potentially extend across all of tiers 210, 220, 230, 240, 250, 260. Other relevant functions that may extend across multiple tiers may relate to security features, domain or group functions, and the like.
  • FIG. 3 depicts additional characteristics of respective deployment tiers in a distributed edge computing environment, based on the tiers discussed with reference to FIG. 2 . This figure depicts additional network latencies at each of the tiers 210, 220, 230, 240, 250, 260, and the gradual increase in latency in the network as the compute is located at a longer distance from the edge endpoints. Additionally, this figure depicts additional power and form factor constraints, use cases, and key performance indicators (KPIs).
  • With these variations and service features in mind, edge computing within the edge cloud 110 may provide the ability to serve and respond to multiple applications of the use cases in real-time or near real-time and meet ultra-low latency requirements. As systems have become highly distributed, networking has become one of the fundamental pieces of the architecture that allows achieving scale with resiliency, security, and reliability. Networking technologies have evolved to provide more capabilities beyond pure network routing capabilities, including coordination of quality of service, security, multi-tenancy, and the like. This has also been accelerated by the development of new smart network adapter cards and other types of network derivatives that incorporate capabilities such as ASICs (application-specific integrated circuits) or FPGAs (field-programmable gate arrays) to accelerate some of those functionalities (e.g., remote attestation).
  • In these contexts, networked processing units have begun to be deployed at network cards (e.g., smart NICs), gateways, and the like, which allow direct processing of network workloads and operations. One example of a networked processing unit is an infrastructure processing unit (IPU), which is a programmable network device that can be extended to provide compute capabilities with far richer functionalities beyond pure networking functions. Another example of a network processing unit is a data processing unit (DPU), which offers programmable hardware for performing infrastructure and network processing operations. The following discussion refers to functionality applicable to an IPU configuration, such as that provided by an Intel® line of IPU processors. However, it will be understood that functionality will be equally applicable to DPUs and other types of networked processing units provided by ARM®, Nvidia®, and other hardware OEMs.
  • FIG. 4 depicts an example compute system architecture that includes a compute platform 420 and a network processing platform comprising an IPU 410. This architecture (and in particular the IPU 410) can be managed, coordinated, and orchestrated by the functionality discussed below, including the functions described with reference to FIG. 6 .
  • The main compute platform 420 is composed of typical elements that are included with a computing node, such as one or more CPUs 424 that may or may not be connected via a coherent domain (e.g., via Ultra Path Interconnect (UPI) or another processor interconnect); one or more memory units 425; one or more additional discrete devices 426 such as storage devices, discrete acceleration cards (e.g., a field-programmable gate array (FPGA), a visual processing unit (VPU), etc.); a baseboard management controller 421; and the like. The compute platform 420 may operate one or more containers 422 (e.g., with one or more microservices), within a container runtime 423 (e.g., Docker, containerd). The IPU 410 operates as a networking interface and is connected to the compute platform 420 using an interconnect (e.g., using either PCIe or CXL). The IPU 410, in this context, can be observed as another small compute device that has its own: (1) processing cores (e.g., provided by low-power cores 417); (2) operating system (OS) and cloud native platform 414 to operate one or more containers 415 and a container runtime 416; (3) acceleration functions provided by an ASIC 411 or FPGA 412; (4) memory 418; (5) network functions provided by network circuitry 413; etc.
  • From a system design perspective, this arrangement provides important functionality. The IPU 410 is seen as a discrete device from the local host (e.g., the OS running in the compute platform CPUs 424) that is available to provide certain functionalities (networking, acceleration, etc.). Those functionalities are typically provided via physical or virtual PCIe functions. Additionally, the IPU 410 is seen as a host (with its own IP address, etc.) that can be accessed by the infrastructure to set up an OS, run services, and the like. The IPU 410 sees all the traffic going to the compute platform 420 and can perform actions, such as intercepting the data or performing some transformation, as long as the correct security credentials are hosted to decrypt the traffic. Traffic going through the IPU traverses all the layers of the OSI (Open Systems Interconnection) model stack (e.g., from the physical layer to the application layer). Depending on the features that the IPU has, processing may be performed at the transport layer only. However, if the IPU has capabilities to perform traffic interception, then the IPU may also be able to intercept traffic at higher layers (e.g., intercept CDN traffic and process it locally).
  • Some of the use cases being proposed for IPUs and similar networked processing units include: to accelerate network processing; to manage hosts (e.g., in a data center); or to implement quality of service policies. However, most functionalities today are focused on using the IPU at the local appliance level and within a single system. These approaches do not address how IPUs could work together in a distributed fashion or how system functionalities can be divided among the IPUs on other parts of the system. Accordingly, the following introduces enhanced approaches for enabling and controlling distributed functionality among multiple networked processing units. This enables the extension of current IPU functionalities to work as a distributed set of IPUs that can work together to achieve stronger features such as resiliency, reliability, etc.
  • Distributed Architectures of IPUs
  • FIG. 5 depicts an IPU arrangement operating as a distributed network processing platform within network and data center edge settings. In a first deployment model of a computing environment 510, workloads or processing requests are directly provided to an IPU platform, such as directly to IPU 514. In a second deployment model of the computing environment 510, workloads or processing requests are provided to some intermediate processing device 512, such as a gateway or NUC (next unit of computing) device form factor, and the intermediate processing device 512 forwards the workloads or processing requests to the IPU 514. It will be understood that a variety of other deployment models involving the composability and coordination of one or more IPUs, compute units, network devices, and other hardware may be provided.
  • With the first deployment model, the IPU 514 directly receives data from use cases 502A. The IPU 514 operates one or more containers with microservices to perform processing of the data. As an example, a small gateway (e.g., a NUC type of appliance) may connect multiple cameras to an edge system that is managed or connected by the IPU 514. The IPU 514 may process data as a small aggregator of sensors that runs on the far edge, or may perform some level of inline processing or preprocessing and send the payload to be further processed by the IPU or the system to which the IPU connects.
  • With the second deployment model, the intermediate processing device 512 provided by the gateway or NUC receives data from use cases 502B. The intermediate processing device 512 includes various processing elements (e.g., CPU cores, GPUs), and may operate one or more microservices for servicing workloads from the use cases 502B. However, the intermediate processing device 512 invokes the IPU 514 to complete processing of the data.
  • In either the first or the second deployment model, the IPU 514 may connect with a local compute platform, such as that provided by a CPU 516 (e.g., Intel® Xeon CPU) operating multiple microservices. The IPU may also connect with a remote compute platform, such as that provided at a data center by CPU 540 at a remote server. As an example, consider a microservice that performs some analytical processing (e.g., face detection on image data), where the CPU 516 and the CPU 540 provide access to this same microservice. The IPU 514, depending on the current load of the CPU 516 and the CPU 540, may decide to forward the images or payload to one of the two CPUs. Data forwarding or processing can also depend on other factors such as SLA for latency or performance metrics (e.g., perf/watt) in the two systems. As a result, the distributed IPU architecture may accomplish features of load balancing.
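  • For illustration only, the following Python sketch shows one way the forwarding decision described above might be expressed; the data structure and function names (PlatformStatus, choose_target) and the specific load, latency, and perf/watt values are hypothetical and are not defined by this description.

```python
from dataclasses import dataclass

@dataclass
class PlatformStatus:
    name: str               # e.g., "local CPU" or "remote CPU"
    load: float             # current utilization, 0.0 - 1.0
    est_latency_ms: float   # estimated latency for servicing this payload
    perf_per_watt: float    # observed performance/watt metric

def choose_target(platforms, sla_latency_ms):
    """Pick the platform that meets the latency SLA with the best perf/watt;
    fall back to the least-loaded platform if none meets the SLA."""
    eligible = [p for p in platforms if p.est_latency_ms <= sla_latency_ms]
    if eligible:
        return max(eligible, key=lambda p: p.perf_per_watt)
    return min(platforms, key=lambda p: p.load)

# Example: the IPU weighs the local host against a remote data-center CPU.
local = PlatformStatus("local CPU", load=0.85, est_latency_ms=12.0, perf_per_watt=3.1)
remote = PlatformStatus("remote CPU", load=0.40, est_latency_ms=9.0, perf_per_watt=4.0)
print(choose_target([local, remote], sla_latency_ms=10.0).name)  # -> "remote CPU"
```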
  • The IPU in the computing environment 510 may be coordinated with other network-connected IPUs. In an example, a Service and Infrastructure orchestration manager 530 may use multiple IPUs as a mechanism to implement advanced service processing schemes for the user stacks. This may also enable implementation of system functionalities such as failover, load balancing, etc.
  • In a distributed architecture example, IPUs can be arranged in the following non-limiting configurations. As a first configuration, a particular IPU (e.g., IPU 514) can work with other IPUs (e.g., IPU 520) to implement failover mechanisms. For example, an IPU can be configured to forward traffic to service replicas that run on other systems when a local host does not respond.
  • As a second configuration, a particular IPU (e.g., IPU 514) can work with other IPUs (e.g., IPU 520) to perform load balancing across other systems. For example, consider a scenario where CDN traffic targeted to the local host is forwarded to another host when I/O or compute in the local host is scarce at a given moment.
  • As a third configuration, a particular IPU (e.g., IPU 514) can work as a power management entity to implement advanced system policies. For example, consider a scenario where the whole system (e.g., including CPU 516) is placed in a C6 state (a low-power/power-down state available to a processor) while forwarding traffic to other systems (e.g., IPU 520) and consolidating it.
  • As will be understood, fully coordinating a distributed IPU architecture requires numerous aspects of coordination and orchestration.
  • The following examples of system architecture deployments provide discussion of how edge computing systems may be adapted to include coordinated IPUs, and how such deployments can be orchestrated to use IPUs at multiple locations to expand to the new envisioned functionality.
  • Distributed IPU Functionality
  • An arrangement of distributed IPUs offers a set of new functionalities to enable IPUs to be service focused. FIG. 6 depicts functional components of an IPU 610, including services and features to implement the distributed functionality discussed herein. It will be understood that some or all of the functional components provided in FIG. 6 may be distributed among multiple IPUs, hardware components, or platforms, depending on the particular configuration and use case involved.
  • In the block diagram of FIG. 6 , a number of functional components are operated to manage requests for a service running in the IPU (or running in the local host). As discussed above, IPUs can either run services or intercept requests arriving to services running in the local host and perform some action. In the latter case, the IPU can perform the following types of actions/functions (provided as non-limiting examples).
  • Peer Discovery. In an example, each IPU is provided with Peer Discovery logic to discover other IPUs in the distributed system that can work together with it. Peer Discovery logic may use mechanisms such as broadcasting to discover other IPUs that are available on a network. The Peer Discovery logic is also responsible for working with the Peer Attestation and Authentication logic to validate and authenticate a peer IPU's identity, determine whether the peer is trustworthy, and determine whether the current system tenant allows the current IPU to work with it. To accomplish this, an IPU may perform operations such as: retrieve a proof of identity and proof of attestation; connect to a trusted service running in a trusted server; or validate that the discovered system is trustworthy. Various technologies (including hardware components or standardized software implementations) that enable attestation, authentication, and security may be used with such operations.
  • Peer Attestation. In an example, each IPU provides interfaces to other IPUs to enable attestation of the IPU itself. IPU Attestation logic is used to perform an attestation flow within a local IPU in order to create the proof of identity that will be shared with other IPUs. Attestation here may integrate previous approaches and technologies to attest a compute platform. This may also involve the use of trusted attestation service 640 to perform the attestation operations.
  • Functionality Discovery. In an example, a particular IPU includes capabilities to discover the functionalities that peer IPUs provide. Once authentication is done, the IPU can determine what functionalities the peer IPUs provide (using the IPU Peer Discovery logic) and store a record of such functionality locally. Examples of properties to discover can include: (i) the type of IPU, the functionalities provided, and associated KPIs (e.g., performance/watt, cost, etc.); (ii) available functionalities as well as possible functionalities to execute under secure enclaves (e.g., enclaves provided by Intel® SGX or TDX technologies); (iii) current services that are running on the IPU and on the system that can potentially accept requests forwarded from this IPU; or (iv) other interfaces or hooks that are provided by an IPU, such as access to remote storage, access to a remote VPU, or access to certain functions. In a specific example, a service may be described by properties such as: a UUID; estimated performance KPIs in the host or IPU; average performance provided by the system during the last N units of time (or any other type of indicator); and like properties.
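  • As a non-limiting sketch, the discovered peer functionality described above could be captured in local records along the following lines; the record and field names (PeerIpuRecord, PeerServiceRecord) are illustrative assumptions rather than a defined interface.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PeerServiceRecord:
    uuid: str                      # service identifier
    est_kpis: Dict[str, float]     # e.g., {"perf_per_watt": 4.2, "cost": 0.8}
    avg_performance: float         # average performance over the last N time units

@dataclass
class PeerIpuRecord:
    ipu_id: str
    ipu_type: str                       # type of IPU / functionalities provided
    secure_enclave_support: bool        # can execute functions under a secure enclave
    services: List[PeerServiceRecord] = field(default_factory=list)
    interfaces: List[str] = field(default_factory=list)  # e.g., "remote-storage", "remote-VPU"

# A local registry of discovered, attested peers, keyed by IPU identifier.
peer_registry: Dict[str, PeerIpuRecord] = {}

def record_peer(record: PeerIpuRecord) -> None:
    """Store (or refresh) the functionality record for an attested peer IPU."""
    peer_registry[record.ipu_id] = record
```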
  • Service Management. The IPU includes functionality to manage services that are running either on the host compute platform or in the IPU itself. Managing (orchestrating) services includes performing service and resource orchestration for the services that can run on the IPU or that the IPU can affect. Two types of usage models are envisioned:
  • External Orchestration Coordination. The IPU may enable external orchestrators to deploy services on the IPU compute capabilities. To do so, an IPU includes a component with Kubernetes (K8s)-compatible APIs to manage the containers (services) that run on the IPU itself. For example, the IPU may run a service that is just providing content to storage connected to the platform. In this case, the orchestration entity running in the IPU may manage the services running in the IPU as happens in other systems (e.g., maintaining the service level objectives).
  • Further, external orchestrators can be allowed to register with the IPU the services running on the host that may require the IPU to broker requests, implement failover mechanisms, or provide other functionalities. For example, an external orchestrator may register that a particular service running on the local compute platform is replicated in another edge node managed by another IPU where requests can be forwarded.
  • In this latter use case, external orchestrators may provide to the Service/Application Intercept logic the inputs that are needed to intercept traffic for these services (as such traffic typically is encrypted). This may include properties such as the source and destination of the traffic to be intercepted, or the key to use to decrypt the traffic. Likewise, such information may be needed to terminate TLS so that the IPU can understand the requests that arrive and so that the other logic can parse them to take actions. For example, if there is a CDN read request, the IPU may need to decrypt the packet to understand that the network packet includes a read request, and may redirect it to another host based on the content that is being intercepted. Examples of Service/Application Intercept information are depicted in table 620 in FIG. 6 .
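  • A minimal sketch of such intercept inputs and the resulting redirection decision follows; the InterceptRule fields and the handle_packet helper are hypothetical stand-ins for the Service/Application Intercept information in table 620, not a format defined by this description.

```python
from dataclasses import dataclass

@dataclass
class InterceptRule:
    service_id: str    # service whose traffic should be intercepted
    src: str           # source of the traffic to intercept
    dst: str           # destination of the traffic to intercept
    tls_key_ref: str   # reference to key material used to terminate TLS

def handle_packet(rule: InterceptRule, decrypted_request: dict, peers: list) -> str:
    """Toy decision: if an intercepted (already-decrypted) CDN request is a read,
    redirect it to a peer that caches the requested object; otherwise keep it local."""
    if decrypted_request.get("op") == "read":
        for peer in peers:
            if decrypted_request.get("object") in peer.get("cached_objects", []):
                return f"redirect to {peer['ipu_id']}"
    return "deliver to local host"

rule = InterceptRule("cdn-service", src="any", dst="local-host:443", tls_key_ref="key-01")
peers = [{"ipu_id": "ipu-520", "cached_objects": ["video/intro.mp4"]}]
print(handle_packet(rule, {"op": "read", "object": "video/intro.mp4"}, peers))
```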
  • External Orchestration Implementation. External orchestration can be implemented in multiple topologies. One supported topology includes having the orchestrator managing all the IPUs running on the backend public or private cloud. Another supported topology includes having the orchestrator managing all the IPUs running in a centralized edge appliance. Still another supported topology includes having the orchestrator running in another IPU that is working as the controller or having the orchestrator running distributed in multiple other IPUs that are working as controllers (master/primary node), or in a hierarchical arrangement.
  • Functionality for Brokering requests. The IPU may include Service Request Brokering logic and Load Balancing logic to perform brokering actions on arrival of requests for target services running in the local system. For instance, the IPU may determine whether those requests can be executed by other peer systems (e.g., accessible through Service and Infrastructure Orchestration 630). This can be caused, for example, by high load in the local system. The local IPU may negotiate with other peer IPUs for the possibility of forwarding the request. Negotiation may involve metrics such as cost. Based on such negotiation metrics, the IPU may decide to forward the request.
  • Functionality for Load Balancing requests. The Service Request Brokering and Load Balancing logic may distribute requests arriving to the local IPU to other peer IPUs. In this case, the other IPUs and the local IPU work together and do not necessarily need brokering. Such logic acts similar to a cloud native sidecar proxy. For instance, requests arriving to the system may be sent to the service X running in the local system (either IPU or compute platform) or forwarded to a peer IPU that has another instance of service X running. The load balancing distribution can be based on existing algorithms, such as selecting the systems that have lower load, using round robin, etc.
  • Functionality for failover, resiliency and reliability. The IPU includes Reliability and Failover logic to monitor the status of the services running on the compute platform or the status of the compute platform itself. The Reliability and Failover logic may require the Load Balancing logic to transiently or permanently forward requests that target specific services in situations such as where: i) the compute platform is not responding; ii) the service running inside the compute node is not responding; or iii) the compute platform load prevents the targeted service from providing the right level of service level objectives (SLOs). Note that the logic must know the required SLOs for the services. Such functionality may be coordinated with service information 650, including SLO information.
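  • The three failover conditions above can be summarized in a small check such as the following sketch; the function name and the latency/SLO parameters are illustrative assumptions.

```python
def needs_failover(platform_responsive: bool,
                   service_responsive: bool,
                   observed_latency_ms: float,
                   slo_latency_ms: float) -> bool:
    """Mirror the three conditions above: unresponsive platform, unresponsive
    service, or load that prevents the service from meeting its SLO."""
    if not platform_responsive:
        return True
    if not service_responsive:
        return True
    return observed_latency_ms > slo_latency_ms

# If failover is needed, the Reliability and Failover logic would ask the
# Load Balancing logic to forward requests to a peer IPU hosting a replica.
if needs_failover(True, True, observed_latency_ms=42.0, slo_latency_ms=25.0):
    print("forward requests for this service to a peer replica")
```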
  • Functionality for executing parts of the workloads. Use cases such as video analytics tend to be decomposed into different microservices that form a pipeline of actions that can be used together. The IPU may include workload pipeline execution logic that understands how workloads are composed and manages their execution. Workloads can be defined as a graph that connects different microservices. The load balancing and brokering logic may be able to understand those graphs and decide what parts of the pipeline are executed where. Further, to perform these and other operations, the Intercept logic will also decode what is included as part of the requests.
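  • As an illustrative sketch (not a defined format), such a workload could be represented as a dependency graph of microservices whose ready stages are computed as follows; the stage names and placements shown are hypothetical.

```python
# A workload expressed as a graph of microservices (edges = dependencies).
# The pipeline execution logic could place each stage on the IPU, the local
# host, or a peer system.
pipeline = {
    "decode":   {"deps": [],           "placement": "ipu"},
    "detect":   {"deps": ["decode"],   "placement": "local-host"},
    "classify": {"deps": ["detect"],   "placement": "peer-ipu"},
    "publish":  {"deps": ["classify"], "placement": "ipu"},
}

def ready_stages(done: set) -> list:
    """Stages whose dependencies have all completed and that can run now."""
    return [name for name, info in pipeline.items()
            if name not in done and all(d in done for d in info["deps"])]

print(ready_stages(set()))        # ['decode']
print(ready_stages({"decode"}))   # ['detect']
```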
  • Resource Management
  • A distributed network processing configuration may enable IPUs to perform an important role in managing resources of edge appliances. As further shown in FIG. 6 , the functional components of an IPU can operate to perform these and similar types of resource management functionalities.
  • As a first example, an IPU can provide management of or access to external resources that are hosted in other locations and expose them as local resources using constructs such as Compute Express Link (CXL). For example, the IPU could potentially provide access to a remote accelerator that is hosted in a remote system via CXL.mem/cache and I/O. Another example includes providing access to a remote storage device hosted in another system. In this latter case, the local IPU could work with another IPU in the storage system and expose the remote system as PCIe VF/PF (virtual functions/physical functions) to the local host.
  • As a second example, an IPU can provide access to IPU-specific resources. Those IPU resources may be physical (such as storage or memory) or virtual (such as a service that provides access to random number generation).
  • As a third example, an IPU can manage local resources that are hosted in the system where it belongs. For example, the IPU can manage power of the local compute platform.
  • As a fourth example, an IPU can provide access to other types of elements that relate to resources (such as telemetry or other types of data). In particular, telemetry provides useful data that is needed to decide where to execute tasks or to identify problems.
  • I/O Management. Because the IPU is acting as a connection proxy between external peer resources (compute systems, remote storage, etc.) and the local compute, the IPU can also include functionality to manage I/O from the system perspective.
  • Host Virtualization and XPU Pooling. The IPU includes Host Virtualization and XPU Pooling logic responsible for managing the access to resources that are outside the system domain (or within the IPU) and that can be offered to the local compute system. Here, “XPU” refers to any type of processing unit, whether a CPU, GPU, VPU, an acceleration processing unit, etc. The IPU logic, after discovery and attestation, can agree with other systems to share external resources with the services running in the local system. IPUs may advertise available resources to other peers, or such resources can be discovered during the discovery phase as introduced earlier. IPUs may request access to those resources from other IPUs. For example, an IPU on system A may request access to storage on system B managed by another IPU. Remote and local IPUs can work together to establish a connection between the target resources and the local system.
  • Once the connection and resource mapping is completed, resources can be exposed to the services running in the local compute node using the VF/PF PCIE and CXL Logic. Each of those resources can be offered as VF/PF. The IPU logic can expose to the local host resources that are hosted in the IPU. Examples of resources to expose may include local accelerators, access to services, and the like.
  • Power Management. Power management is one of the key features to achieve favorable system operational expenditures (OPEX). The IPU is very well positioned to optimize the power consumption of the local system. The Distributed and Local Power Management Unit is responsible for metering the power that the system is consuming and the load that the system is receiving, and for tracking the service level agreements that the various services running in the system are achieving for the arriving requests. Likewise, when power efficiencies (e.g., power usage effectiveness (PUE)) are not achieving certain thresholds or the local compute demand is low, the IPU may decide to forward the requests for local services to other IPUs that host replicas of the services. Such power management features may also coordinate with the Brokering and Load Balancing logic discussed above. As will be understood, IPUs can work together to decide where requests can be consolidated to establish higher power efficiency as a system. When traffic is redirected, the local power consumption can be reduced in different ways.
  • Example operations that can be performed include: changing the system to C6 State; changing the base frequencies; performing other adaptations of the system or system components.
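  • A simplified sketch of such a consolidation policy is shown below; the PUE target, demand floor, and action strings are hypothetical values used only to illustrate the decision flow.

```python
def power_actions(pue: float, pue_target: float,
                  local_demand: float, demand_floor: float) -> list:
    """Sketch of the consolidation policy: if power efficiency misses its target
    or local demand is low, redirect requests to peer replicas and step the
    local platform down (e.g., C6 state, lower base frequencies)."""
    actions = []
    if pue > pue_target or local_demand < demand_floor:
        actions.append("forward requests to peer IPUs hosting service replicas")
        actions.append("place local compute platform in C6 state")
        actions.append("reduce base frequencies")
    return actions

# Hypothetical readings: poor PUE and low local demand trigger consolidation.
print(power_actions(pue=1.8, pue_target=1.4, local_demand=0.1, demand_floor=0.2))
```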
  • Telemetry Metrics. The IPU can generate multiple types of metrics that can be of interest to services, orchestration, or tenants owning the system. In various examples, telemetry can be accessed: (i) out of band via side interfaces; (ii) in band by services running in the IPU; or (iii) out of band using PCIe or CXL from the host perspective. Relevant types of telemetry can include: platform telemetry; service telemetry; IPU telemetry; traffic telemetry; and the like.
  • System Configurations for Distributed Processing
  • Further to the examples noted above, the following configurations may be used for processing with distributed IPUs:
  • 1) Local IPUs connected to a compute platform by an interconnect (e.g., as shown in the configuration of FIG. 4 );
  • 2) Shared IPUs hosted within a rack/physical network—such as in a virtual slice or multi-tenant implementation of IPUs connected via CXL/PCI-E (local), or extension via Ethernet/Fiber for nodes within a cluster;
  • 3) Remote IPUs accessed via an IP Network, such as within certain latency for data plane offload/storage offloads (or, connected for management/control plane operations); or
  • 4) Distributed IPUs providing an interconnected network of IPUs, including as many as hundreds of nodes within a domain.
  • Configurations of distributed IPUs working together may also include fragmented distributed IPUs, where each IPU or pooled system provides part of the functionalities, and each IPU becomes a malleable system. Configurations of distributed IPUs may also include virtualized IPUs, such as provided by a gateway, switch, or an inline component (e.g., inline between the service acting as IPU), and in some examples, in scenarios where the system has no IPU.
  • Other deployment models for IPUs may include IPU-to-IPU in the same tier or a close tier; IPU-to-IPU in the cloud (data to compute versus compute to data); integration in small device form factors (e.g., gateway IPUs); gateway/NUC+IPU which connects to a data center; multiple GW/NUC (e.g. 16) which connect to one IPU (e.g. switch); gateway/NUC+IPU on the server; and GW/NUC and IPU that are connected to a server with an IPU.
  • The preceding distributed IPU functionality may be implemented among a variety of types of computing architectures, including one or more gateway nodes, one or more aggregation nodes, or edge or core data centers distributed across layers of the network (e.g., in the arrangements depicted in FIGS. 2 and 3 ). Accordingly, such IPU arrangements may be implemented in an edge computing system by or on behalf of a telecommunication service provider (“telco”, or “TSP”), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities. Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives. Such edge computing systems may be embodied as a type of device, appliance, computer, or other “thing” capable of communicating with other edge, networking, or endpoint components.
  • FIG. 7 depicts a block diagram of example components in a computing device 750 which can operate as a distributed network processing platform. The computing device 750 may include any combinations of the components referenced above, implemented as integrated circuits (ICs), as a package or system-on-chip (SoC), or as portions thereof, discrete electronic devices, or other modules, logic, instruction sets, programmable logic or algorithms, hardware, hardware accelerators, software, firmware, or a combination thereof adapted in the computing device 750, or as components otherwise incorporated within a larger system. Specifically, the computing device 750 may include processing circuitry comprising one or both of a network processing unit 752 (e.g., an IPU or DPU, as discussed above) and a compute processing unit 754 (e.g., a CPU).
  • The network processing unit 752 may provide a networked specialized processing unit such as an IPU, DPU, network processing unit (NPU), or other “xPU” outside of the central processing unit (CPU). The processing unit may be embodied as a standalone circuit or circuit package, integrated within an SoC, integrated with networking circuitry (e.g., in a SmartNIC), or integrated with acceleration circuitry, storage devices, or AI or specialized hardware, consistent with the examples above.
  • The compute processing unit 754 may provide a processor as a central processing unit (CPU) microprocessor, multi-core processor, multithreaded processor, an ultra-low voltage processor, an embedded processor, or other forms of a special purpose processing unit or specialized processing unit for compute operations.
  • Either the network processing unit 752 or the compute processing unit 754 may be a part of a system on a chip (SoC) which includes components formed into a single integrated circuit or a single package. The network processing unit 752 or the compute processing unit 754 and accompanying circuitry may be provided in a single socket form factor, multiple socket form factor, or a variety of other formats.
  • The processing units 752, 754 may communicate with a system memory 756 (e.g., random access memory (RAM)) over an interconnect 755 (e.g., a bus). In an example, the system memory 756 may be embodied as volatile (e.g., dynamic random access memory (DRAM), etc.) memory. Any number of memory devices may be used to provide for a given amount of system memory. A storage 758 may also couple to the processor 752 via the interconnect 755 to provide for persistent storage of information such as data, applications, operating systems, and so forth. In an example, the storage 758 may be implemented as non-volatile storage such as a solid-state disk drive (SSD).
  • The components may communicate over the interconnect 755. The interconnect 755 may include any number of technologies, including industry-standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), Compute Express Link (CXL), or any number of other technologies. The interconnect 755 may couple the processing units 752, 754 to a transceiver 766, for communications with connected edge devices 762.
  • The transceiver 766 may use any number of frequencies and protocols. For example, a wireless local area network (WLAN) unit may implement Wi-Fi® communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, or a wireless wide area network (WWAN) unit may implement wireless wide area communications according to a cellular, mobile network, or other wireless wide area protocol. The wireless network transceiver 766 (or multiple transceivers) may communicate using multiple standards or radios for communications at a different range. A wireless network transceiver 766 (e.g., a radio transceiver) may be included to communicate with devices or services in the edge cloud 110 or the cloud 130 via local or wide area network protocols.
  • The communication circuitry (e.g., transceiver 766, network interface 768, external interface 770, etc.) may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., a cellular networking protocol such as a 3GPP 4G or 5G standard, a wireless local area network protocol such as IEEE 802.11/Wi-Fi®, a wireless wide area network protocol, Ethernet, Bluetooth®, Bluetooth Low Energy, an IoT protocol such as IEEE 802.15.4 or ZigBee®, Matter®, low-power wide-area network (LPWAN) or low-power wide-area (LPWA) protocols, etc.) to effect such communication. Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of components 766, 768, or 770. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry.
  • The computing device 750 may include or be coupled to acceleration circuitry 764, which may be embodied by one or more AI accelerators, a neural compute stick, neuromorphic hardware, an FPGA, an arrangement of GPUs, one or more SoCs, one or more CPUs, one or more digital signal processors, dedicated ASICs, or other forms of specialized processors or circuitry designed to accomplish one or more specialized tasks. These tasks may include AI processing (including machine learning, training, inferencing, and classification operations), visual data processing, network data processing, object detection, rule analysis, or the like. Accordingly, in various examples, applicable means for acceleration may be embodied by such acceleration circuitry.
  • The interconnect 755 may couple the processing units 752, 754 to a sensor hub or external interface 770 that is used to connect additional devices or subsystems. The devices may include sensors 772, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global navigation system (e.g., GPS) sensors, pressure sensors, and the like. The hub or interface 770 further may be used to connect the edge computing node 750 to actuators 774, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.
  • In some optional examples, various input/output (I/O) devices may be present within or connected to, the edge computing node 750. For example, a display or other output device 784 may be included to show information, such as sensor readings or actuator position. An input device 786, such as a touch screen or keypad may be included to accept input. An output device 784 may include any number of forms of audio or visual display, including simple visual outputs such as LEDs or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the edge computing node 750.
  • A battery 776 may power the edge computing node 750, although, in examples in which the edge computing node 750 is mounted in a fixed location, it may have a power supply coupled to an electrical grid, or the battery may be used as a backup or for temporary capabilities. A battery monitor/charger 778 may be included in the edge computing node 750 to track the state of charge (SoCh) of the battery 776. The battery monitor/charger 778 may be used to monitor other parameters of the battery 776 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 776. A power block 780, or other power supply coupled to a grid, may be coupled with the battery monitor/charger 778 to charge the battery 776.
  • In an example, the instructions 782 on the processing units 752, 754 (separately, or in combination with the instructions 782 of the machine-readable medium 760) may configure execution or operation of a trusted execution environment (TEE) 790. In an example, the TEE 790 operates as a protected area accessible to the processing units 752, 754 for secure execution of instructions and secure access to data. Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in the edge computing node 750 through the TEE 790 and the processing units 752, 754.
  • The computing device 750 may be a server, an appliance computing device, and/or any other type of computing device with the various form factors discussed above. For example, the computing device 750 may be provided by an appliance computing device that is a self-contained electronic device including a housing, a chassis, a case, or a shell.
  • In an example, the instructions 782 provided via the memory 756, the storage 758, or the processing units 752, 754 may be embodied as a non-transitory, machine-readable medium 760 including code to direct the processor 752 to perform electronic operations in the edge computing node 750. The processing units 752, 754 may access the non-transitory, machine-readable medium 760 over the interconnect 755. For instance, the non-transitory, machine-readable medium 760 may be embodied by devices described for the storage 758 or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices. The non-transitory, machine-readable medium 760 may include instructions to direct the processing units 752, 754 to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality discussed herein. As used herein, the terms “machine-readable medium”, “machine-readable storage”, “computer-readable storage”, and “computer-readable medium” are interchangeable.
  • In further examples, a machine-readable medium also includes any tangible medium that is capable of storing, encoding, or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. A “machine-readable medium” thus may include but is not limited to, solid-state memories, and optical and magnetic media. The instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., HTTP).
  • A machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format. In an example, information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as instructions themselves or a format from which the instructions may be derived. This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like. The information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions from the information (e.g., processing by the processing circuitry) may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions.
  • In an example, the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine-readable medium. The information, when provided in multiple parts, may be combined, unpacked, and modified to create the instructions. For example, the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers.
  • In further examples, a software distribution platform (e.g., one or more servers and one or more storage devices) may be used to distribute software, such as the example instructions discussed above, to one or more devices, such as example processor platform(s) and/or example connected edge devices noted above. The example software distribution platform may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices. In some examples, the providing entity is a developer, a seller, and/or a licensor of software, and the receiving entity may be consumers, users, retailers, OEMs, etc., that purchase and/or license the software for use and/or re-sale and/or sub-licensing.
  • In some examples, the instructions are stored on storage devices of the software distribution platform in a particular format. A format of computer readable instructions includes, but is not limited to a particular code language (e.g., Java, JavaScript, Python, C, C#, SQL, HTML, etc.), and/or a particular code state (e.g., uncompiled code (e.g., ASCII), interpreted code, linked code, executable code (e.g., a binary), etc.). In some examples, the computer readable instructions stored in the software distribution platform are in a first format when transmitted to an example processor platform(s). In some examples, the first format is an executable binary in which particular types of the processor platform(s) can execute. However, in some examples, the first format is uncompiled code that requires one or more preparation tasks to transform the first format to a second format to enable execution on the example processor platform(s). For instance, the receiving processor platform(s) may need to compile the computer readable instructions in the first format to generate executable code in a second format that is capable of being executed on the processor platform(s). In still other examples, the first format is interpreted code that, upon reaching the processor platform(s), is interpreted by an interpreter to facilitate execution of instructions.
  • Concurrent Execution Among Distributed Compute Locations
  • The following discussion refers to various examples of concurrent execution of workloads and workload tasks among distributed compute locations. With the use of a distributed IPU mesh network, optimized dataflows and low latencies can be provided for concurrently executed tasks. Among other benefits, the following data flow adaptations provide a technical improvement for the utilization of heterogeneous computation resources by reducing response times in closed-loop admission control systems.
  • FIG. 8 depicts an example arrangement of distributed processing provided at an edge computing network layer, using a distributed infrastructure processing unit mesh network. Specifically, FIG. 8 depicts computing operations coordinated among a user layer 810, an edge layer 820, and a cloud layer 830. Consistent with the examples discussed above (e.g., with reference to FIGS. 1 to 3 ), edge computing operations may be performed at the edge layer 820 based on requests from client devices or consumers at the user layer 810, such as from one or more heterogenous networks 812, a vehicular network 814, a machine-to-machine (M2M) or device-to-device (D2D) network (not shown), or other network arrangements. The edge layer 820 may further invoke a cloud layer 830 and cloud services 832 to perform further data processing or data retrieval (e.g., at one or more remote data centers or offices).
  • A variety of disaggregated resources available in the edge layer 820 may be combined, pooled, or coordinated in order to perform tasks (e.g., executable portions or segments of one or multiple workloads) for clients and other consumers. For instance, resources at a first base station 822A, including compute resources 842A, may be coordinated with the compute resources 842B at a second base station 822B. Other types of resources, not shown, may include communication resources, storage and caching resources, and the like, provided among a variety of devices or nodes. The resources may be arranged into compute pools, memory pools, or storage pools, coordinated via various interconnects and network protocols.
  • The use of a distributed IPU mesh network 840 enables a variety of coordinated and distributed workload processing operations. For instance, a first IPU (e.g., IPU 844A) at a first node or system (e.g., base station 822A) may invoke additional compute resources at a second node or system (e.g., compute resources 842B at base station 822B) based on communications to a second IPU (e.g., IPU 844B). Thus, workloads, workload tasks, processing operations, and other related concepts may be distributed across the IPU mesh network 840 based on the performance characteristics and coordination properties discussed herein.
  • FIG. 9 depicts a task graph 900, providing an example for illustrating the present techniques for optimization of concurrent task execution. Here, the task graph 900 is shown as including various tasks: task 901, tasks 911, 912, and 913, task 921, and tasks 931, 932, and 933 (each labeled in FIG. 9 ). The arrows forming the task graph edges indicate control dependencies, data dependencies, or both; this means, for example, that tasks 911, 912, and 913 wait for completion of task 901, that task 921 waits for tasks 911, 912, and 913 to complete, and that tasks 931, 932, and 933 wait for task 921 to complete. Task 921, in particular, is at a join point in the task graph 900.
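  • For illustration, the dependencies of task graph 900 can be captured in a simple adjacency structure, from which the join point (task 921) can be detected as any task with more than one predecessor; the Python representation below is a sketch, not a format defined by this description.

```python
# Dependency edges from FIG. 9: tasks 911, 912, 913 wait on task 901;
# task 921 (the join point) waits on 911, 912, 913; tasks 931, 932, 933 wait on 921.
task_graph_900 = {
    901: [],
    911: [901], 912: [901], 913: [901],
    921: [911, 912, 913],          # join point
    931: [921], 932: [921], 933: [921],
}

def join_points(graph: dict) -> list:
    """Tasks with more than one predecessor are join points."""
    return [task for task, preds in graph.items() if len(preds) > 1]

print(join_points(task_graph_900))   # [921]
```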
  • Join points are common and are a natural consequence of extracting as much parallelism as possible from a set of computations (such as from the parallel computations in tasks 911, 912, and 913 in FIG. 9 ) and then having a point at which the results from those parallel actions are merged, aggregated, reduced, or otherwise handled (in task 921, for example). At a join point, the merged, aggregated, or reduced results often provide control or data inputs to the next set of computations (such as tasks 931, 932, and 933).
  • A common inefficiency that arises from join points is illustrated in FIG. 9 . Here, the task graph 900 is labeled to show the results of a hypothetical execution, indicating the units of time (e.g., milliseconds) that each of the different tasks took. For instance, the figure shows that one task took 2 ms to execute, another task took 5 ms to execute, and so forth. Because task 921 needed to wait for tasks 911, 912, and 913 to complete (in this case, until the slowest of those tasks was finished), the overall execution time was stretched out and took much longer (79 ms) than the execution times of the faster tasks. This delay cascaded to the other tasks 931, 932, and 933 that waited on task 921.
  • This problem can also be considered in the context of a variety of production-scale clusters. For example, consider the real-world example of this problem discussed by Ananthanarayanan, Ganesh, et al., “Reining in the Outliers in Map-Reduce Clusters using Mantri,” 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI 10) (2010). This paper discusses how join points can occur during multiple processing operations, as a particular phase of processing must wait for data to arrive from “stragglers”. Altogether, without stragglers, the entire job would have finished much earlier.
  • Join points may cause an overall computation to be delayed by the worst-case delays encountered in executions of preceding tasks. In a larger graph, these delays compound. Since it is inefficient to busy-wait for everyone else to come to a join point, schedulers, orchestrators, runtimes, or other entities often switch execution resources from a given task graph to some other usages (essentially multiplexing the resources) while a join point is not reached in that task graph. However, this has the effect of further amplifying delays in assigning processing resources and causing a loss of locality in various hardware- and software-managed caches.
  • In some existing approaches, throughput optimization is permitted until P90/P95/P99 latencies cross some threshold, and at that point, resources are overcommitted rather than being oversubscribed in order to maintain predictable or bounded tail latencies. In this context, P90/P95/P99 latencies refer to scenarios where some percentage (i.e., 90, 95, or 99 percent) of the requests are faster than a given latency. In other existing approaches, more work cannot come into a system until previous work has been completed. For example, many interactive systems may delay formulating a new request based on results of a previous request. Thus, inordinate delays in reaching join points can cause both latency vulnerabilities and underutilized, overprovisioned resources and wasted power.
  • In the following paragraphs, optimization of concurrent execution is provided through approaches for identifying remediation and resolution of join points. In various examples, the following approaches provide an IPU-based or IPU-assisted, and software guided, tracking of progress in the execution of task graphs. The approaches also provide for an IPU-based or IPU-assisted pinpointing of “laggard” or “straggler” tasks (i.e., tasks that take too long and hold up join points). The approaches also provide IPU-based or IPU-assisted implementation of remediation measures when join-points take too long. This provides a tailored use of flexibilities in task scheduling to simultaneously maximize resource sharing across non-dependent tasks and minimize dilation of time between dependent tasks.
  • The following approaches are described with reference to an overall view of task management in an IPU-coordinated distributed compute setting, but it will be understood that a variety of communications, data values, and operations are needed to implement such task management. A variety of telemetry and computation graphs used to effect these operations may be managed by a respective IPU, for example. Likewise, various communications and communication patterns (including network telemetry) and analysis may support the following approaches.
  • Also, a variety of techniques, beyond those discussed below, may be used for refactoring or transforming a task graph. For example, various approaches may be used to break a single computationally demanding task T into a collection of smaller tasks {T0, T1, . . . , Tk} in order to execute the smaller tasks {Tj | 0 ≤ j ≤ k} concurrently on multiple resources, so that the overall clock time for completing the tasks can be shortened. Other approaches may be applied under complementary areas of optimization related to program transformation techniques.
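  • As a hedged illustration of such task splitting, the following Python sketch breaks one computation into k smaller subtasks that run concurrently and are merged at a join point; the chunking strategy and the per-chunk computation are arbitrary examples.

```python
from concurrent.futures import ThreadPoolExecutor

def split_task(items: list, k: int) -> list:
    """Break one computationally demanding task over `items` into roughly k
    smaller tasks, each covering a slice of the input."""
    step = max(1, len(items) // k)
    return [items[i:i + step] for i in range(0, len(items), step)]

def subtask(chunk: list) -> int:
    # Stand-in for the real per-chunk computation.
    return sum(x * x for x in chunk)

chunks = split_task(list(range(1_000)), k=8)
with ThreadPoolExecutor(max_workers=8) as pool:
    partials = list(pool.map(subtask, chunks))   # the smaller tasks run concurrently
print(sum(partials))                             # results merged at the join point
```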
  • FIG. 10 depicts a workflow sequence 1000 for identifying and triggering remediation for concurrent execution bottlenecks. In the workflow sequence 1000, actions are taken for triggering remediation, identifying what to remedy, and determining how to remedy it. This sequence 1000 is described based on the scenario in FIG. 11 , which depicts a further example of concurrent task execution in a task graph 1110 based on tasks 901, 911, 912, 913, and 921 and the dependencies introduced above in FIG. 9 .
  • As previously noted, tasks 901, 911, and 912 each have to complete before task 921 can begin. Using the hypothetical completion times 1120 (also shown in FIG. 9 ), at time index 07.00 (the time for completing one task and then another), it becomes necessary to begin waiting for the remaining task. In other words, any additional wait for that task after time index 07.00 is in some way a measure of inefficiency in the scheduling of that task.
  • First, operation 1010 is performed to monitor time metrics (and, as applicable, perform evaluation or calculation of time metrics) for one or more tasks. In the example of FIG. 11 , a waiting time metric begins to be monitored to identify the time that the task at the join point (task 921) spends waiting. This monitoring can be performed in a lightweight manner by sampling across the task graph, for instance, at the different IPUs where tasks are either scheduled or where precursor tasks have already been dispatched. Alternatively, a task-graph wait-state can be defined at the application level and tracked by a task graph system (such as Intel® Threading Building Blocks, TensorFlow, or graph scheduler middleware). Such a task graph system can be used to aggregate this wait-state across all machinery (IPUs, CPUs, XPUs, etc.).
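  • One lightweight way to accumulate such a wait-state from periodic samples is sketched below; the sample_wait and aggregated_wait helpers and the sampling interval are illustrative assumptions, not part of the described system.

```python
import time
from collections import defaultdict
from typing import Optional

# Accumulated wait (seconds) per waiting task, fed by samples reported from
# the IPUs where precursor tasks were dispatched.
wait_samples = defaultdict(float)
_last_sample = {}

def sample_wait(task_id: int, now: Optional[float] = None) -> None:
    """Record another sampling interval while `task_id` is blocked at a join point."""
    now = time.monotonic() if now is None else now
    prev = _last_sample.get(task_id)
    if prev is not None:
        wait_samples[task_id] += now - prev
    _last_sample[task_id] = now

def aggregated_wait(task_id: int) -> float:
    """Total wait observed so far for the given task (e.g., task 921)."""
    return wait_samples[task_id]

# Example with explicit timestamps (seconds): two samples 0.5 s apart.
sample_wait(921, now=100.0)
sample_wait(921, now=100.5)
print(aggregated_wait(921))   # 0.5
```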
  • Second, operation 1020 is performed to determine a wait time relative to a threshold. A threshold, as used herein, may refer to a fixed value, a calculated value, a range of values, or other comparable data item. The determination of operation 1020 monitors for when an aggregated wait time penalty rises and crosses a defined threshold (e.g., a weighted and/or normalized threshold). In an example, this defined threshold may be weighted by the number of completed tasks that are waiting for the remaining tasks at a join point in a task graph; in a further example, this defined threshold may be normalized by the time that it takes for the first or the second task to reach the join point. The use of a weighted and normalized threshold is shown in the threshold evaluation 1130. As an example, consider a scenario where a threshold is used to trigger a type of QoS “alarm”. This threshold can be triggered if, for example, the total aggregated time for the tasks of a workload is X and the current amount of waiting time for the task that is blocking the other tasks is greater than 0.1*X, where the threshold is set to 0.10 (10 percent).
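  • A worked sketch of this threshold check follows; the 10 percent threshold matches the example above, while the specific millisecond figures are hypothetical.

```python
def qos_alarm(total_task_time_ms: float, blocking_wait_ms: float,
              threshold: float = 0.10) -> bool:
    """Raise the QoS 'alarm' when the wait caused by the blocking task exceeds
    threshold * total aggregated task time (the 10 percent example above).
    A weighted variant could multiply the wait by the number of completed
    tasks held at the join point and normalize by the first arrival time."""
    return blocking_wait_ms > threshold * total_task_time_ms

# Hypothetical numbers: 70 ms of aggregated task time, 9 ms of blocking wait.
print(qos_alarm(total_task_time_ms=70.0, blocking_wait_ms=9.0))   # True (9 > 7)
```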
  • When the threshold is crossed, a remediation may be initiated, including by selecting one or more tasks (or, groups or types of applicable tasks) and then expediting their execution. In general, this evaluation of the threshold may be performed by a software orchestrator, scheduler, or the like. Such evaluation operations may run on a CPU, or may be offloaded to one or more IPUs which can locally aggregate the wait times coming in from other IPUs.
  • Third, operation 1030 is performed to identify characteristics of the particular task(s) for remediation, thus identifying what aspect of the task execution is problematic. Specifically, the system can identify which executing task or tasks (or, groups or types of tasks) should be remediated so that they can be completed sooner (versus taking no action) (e.g., as in operation 1131). The identification of what task to remedy can be performed using various approaches.
  • In a first approach, this includes identifying the task(s) that have not yet reached the join point, and from these tasks, identifying those tasks that have exhibited the slowest progress. Progress may be measured by such rates as #instructions-retired/wall-clock-time, #system-calls-performed/wall-clock-time, #network-messages-sent-and-received/wall-clock-time.
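• For illustration only, a sketch of this first approach is shown below using a single progress rate (instructions retired per wall-clock second); the telemetry fields are hypothetical stand-ins for whichever rates an implementation collects.

    from dataclasses import dataclass

    @dataclass
    class TaskTelemetry:
        task_id: str
        instructions_retired: int
        wall_clock_seconds: float

    def slowest_tasks(pending: list, count: int = 1) -> list:
        """Return the IDs of the `count` pending tasks with the lowest progress rate."""
        def rate(t: TaskTelemetry) -> float:
            return t.instructions_retired / max(t.wall_clock_seconds, 1e-9)
        return [t.task_id for t in sorted(pending, key=rate)[:count]]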
• In a second approach, an application cohort or a model can provide estimates for a count of monitorable operations of some type, and application telemetry can provide the fraction of those operations that have completed at a given point in time. Further, this may allow selection of the task or tasks whose fraction of operations completed is among the "K" lowest fractions at a given time. Here "K" may be a specified static fraction, or a recommendation formula for "K" may specify K as a function of how much weighted wait-time has accumulated.
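• The following sketch illustrates the second approach under stated assumptions: expected operation counts come from an application-provided model, completed counts come from telemetry, and "K" grows with accumulated weighted wait time (the growth formula here is assumed, not prescribed).

    def select_k_laggards(expected_ops: dict,
                          completed_ops: dict,
                          weighted_wait: float,
                          wait_per_extra_laggard: float = 5.0) -> list:
        # Assumed recommendation formula: start with one laggard and add one for
        # every `wait_per_extra_laggard` seconds of accumulated weighted wait time.
        k = 1 + int(weighted_wait // wait_per_extra_laggard)
        fractions = {tid: completed_ops.get(tid, 0) / max(expected_ops[tid], 1)
                     for tid in expected_ops}
        return sorted(fractions, key=fractions.get)[:k]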
• Fourth, at operation 1040, as a result of the evaluations above, one or more "laggard" tasks are identified for remediation, and a resolution (a remediation) is identified. A laggard task (e.g., identified in operation 1132), as used herein, is a task that, according to one of the approaches in operation 1030, needs to increase its pace of execution in order to reach the join point sooner than it otherwise would.
• Fifth, at operation 1050, various approaches for remediation (e.g., expediting the laggard task(s)) are applied. After having identified one or more tasks which need to be expedited, the remediation is performed (e.g., expediting execution as in operation 1133). Remediation may be accomplished by one or more of the following approaches.
• FBI—Fallback Infrastructure. FBI refers to a small portion of high-performance infrastructure that is allocated on an ad-hoc basis for short durations, to jobs or tasks that need to run with the fewest possible bandwidth or power constraints. Jobs or tasks are assigned to this infrastructure only if they are identified (e.g., in operation 1040) and are movable to such infrastructure. Such fallback infrastructure may be temporarily or permanently used, and logic may be used to determine how to reverse the use of the fallback infrastructure.
• DEA—Deferred execution arrangement(s). DEA refers to arrangements that apply to a category of tasks that have no forward dependencies on their completion times. Jobs or tasks on which no other jobs or tasks depend are also referred to as singletons. Under these arrangements, such tasks can be run whenever FBI (as above) resources are underutilized, including during times where such resources are entirely idle. Similarly, such jobs can be de-scheduled or suspended (pre-empted) for durations of time even when they are running on ordinary (non-FBI) infrastructure.
• CIA—Compute-if-available jobs. CIA jobs refer to a set of jobs that can be subject to DEA for defined periods of time. Frequently these may be jobs that are submitted and run as batch jobs, and while they may have overall SLAs associated with them, the amount of slack that is available within those SLAs makes it possible for such jobs to have long durations of time during which they can be run on a preemptable basis.
• HCR—Hardware-assisted checkpointing/resume. HCR refers to a capability engineered into IPUs to perform background checkpointing of jobs (tasks) so that they can be efficiently migrated (without burdening CPUs) from execution into a suspended state at a given execution host, or reactivated by one or more IPUs from a suspended state at one execution host to a ready-to-run state (i.e., resumed) at the same or a different execution host.
• To achieve remediation, HCR is enabled and used for task duplication at FBI resources, in parallel with the ongoing execution. As noted above, FBI resources are pre-reserved for prioritized allocation. While FBI resources are not being used for dealing with tail-latency outliers, they are furnished for low-priority/best-effort/preemptable execution of other singleton tasks (such as CIA tasks) under deferred execution arrangements (DEA). These can be registered ahead of time and funneled by the IPUs to the FBI.
  • In an example, when a job or task is assigned to the FBI, the job or task may be classified under one of four categories.
• Category one: a first category is a singleton task that simply absorbs surplus capacity at an FBI host, instead of such time or power being wasted. It is capable of being preempted for arbitrary durations, and may also be transferred out of FBI and assigned by an IPU to any other normal execution host that has low utilization.
  • Category two: a second category is a singleton task that should not be preempted ordinarily, but may be asked to yield; and it yields within a short, well-defined interval so that it can minimize the amount of state that needs to be saved or restored. In general, because such tasks require very little state to be saved or restored when yielding voluntarily, they may be preferred over ordinary CIA tasks at the FBI nodes.
• Category three: a third category is priority tasks that are preemptable: tasks may be temporarily assigned to FBI because they need to be sped up (e.g., as discussed in operations 1030, 1040, above). However, barring a few exceptions, a priority task may be preemptable or may be capable of yielding. Such tasks may be preempted when they need to be de-scheduled temporarily in order to make room for tasks that have even higher priority, or as determined by a scheduler which may time-share the fallback infrastructure across a limited number of equally high-priority tasks.
• Category four: a fourth category is priority tasks that are non-preemptable: certain tasks that are particularly heavy in their computation demands, but are overall rare in occurrence, may be permanently assigned to FBI to ensure that they can complete in the minimum duration of time possible. The identification of such tasks may be based not on instantaneous selection (e.g., in operations 1030, 1040, above), but on the basis of long-term (historical) observations about their past performance on non-FBI infrastructure. Alternatively, the tasks may be identified by an entity that creates a task graph or estimates task execution times based on various parameters such as the volume or velocity of inputs.
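• The four categories above might be represented in an IPU-side scheduler as a simple classification helper, sketched below; the enum names and decision rules are hypothetical and only summarize the descriptions given in this section.

    from enum import Enum, auto

    class FbiCategory(Enum):
        SURPLUS_SINGLETON = auto()         # category one: absorbs spare FBI capacity, freely preemptable
        YIELDING_SINGLETON = auto()        # category two: yields quickly with minimal state to save/restore
        PREEMPTABLE_PRIORITY = auto()      # category three: expedited, but can be de-scheduled
        NON_PREEMPTABLE_PRIORITY = auto()  # category four: heavy, rare tasks pinned to FBI

    def classify_for_fbi(is_singleton: bool, yields_quickly: bool,
                         expedited: bool, historically_heavy: bool) -> FbiCategory:
        if historically_heavy:
            return FbiCategory.NON_PREEMPTABLE_PRIORITY
        if expedited:
            return FbiCategory.PREEMPTABLE_PRIORITY
        if is_singleton and yields_quickly:
            return FbiCategory.YIELDING_SINGLETON
        return FbiCategory.SURPLUS_SINGLETON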
• Once remediation has been effected (e.g., in operation 1050), the next downstream task from such a laggard task (straggler), when launched, implicitly begins to be tracked for a short time. If the rate of progress is high, then the remediation can be identified as complete. If the rate of progress is low, then another remediation or a variation of the remediation may be applied (including repeating the remediation steps above).
• A distinction can be made between the present remediation strategy and the discipline of earliest deadline first (EDF) scheduling. EDF is used primarily on a single node (host) to schedule jobs/tasks with the objective of achieving low latency response times, and is complementary to this proposal as follows. The actions proposed above (e.g., in operations 1010-1040) may precede the application of EDF by selecting an artificially low (near-term) deadline for the identified (to-be-expedited) tasks, without migrating the identified tasks to FBI. However, the amount of infrastructure capacity at any given node may not be sufficient to expedite a task to be remedied locally. In this scenario, HCR can be used to automatically migrate such a task to FBI, or to migrate some other contending task of high priority to the FBI (e.g., so that the task in question can be given a larger slice of local resources).
• FIG. 12 provides a flowchart of an example method 1200 for optimizing concurrent execution of workload tasks in a distributed computing environment. The method 1200 may be implemented by one or more networked processing units, with instructions embodied thereon to be executed by the networked processing unit(s), consistent with the examples of networked processing units (e.g., IPUs) discussed above. Further, these operations may be performed in a scenario where multiple tasks of a workload are distributed among multiple compute locations of the distributed computing environment.
  • At 1210, operations are performed to identify multiple tasks and processing dependencies within a computing workload. For example, the operations may include identifying and tracking multiple tasks of a computing workload that includes processing dependencies among the tasks, in a scenario where two or more of the tasks are executed concurrently. In a further example, identifying the multiple tasks of the workload includes splitting the workload into the multiple tasks.
  • At 1220, operations are performed to monitor or evaluate execution time for each task of the computing workload. In an example, the execution time is monitored or tracked relative to a respective execution time threshold applicable for each of the tasks (or, for a group of the tasks).
• At 1230, operations are performed to calculate the execution time threshold for each task of the computing workload (including repeating the calculation and monitoring on an ongoing basis, relative to the execution time threshold). In an example, calculating the execution time threshold for the particular task includes weighting an execution time threshold by an amount of waiting time elapsed for at least one completed task to reach the join point (and wait for the particular task). In another example, the threshold is calculated for certain types or groups of tasks. Other examples for weighting and normalizing values relative to a threshold are discussed above.
• At 1240, operations are performed to identify the execution time of a particular task as exceeding an execution time threshold for the particular task. In an example, this particular task provides an input to a dependent task that is a join point of the workload. For instance, the dependent task may receive a control input or data input from the particular task and at least one previous task of the workload. In an example, a remediation (as discussed below) is identified and applied based on determining that the dependent task is a join point of the workload.
  • At 1250, operations are performed to determine a remediation based on the particular task and the identified execution time (e.g., based on the identified execution time meeting, exceeding, or crossing some threshold value that is dynamically calculated for the task as in 1240). In a specific example, the remediation includes use of fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time.
  • Consistent with the examples above, the use of the fallback compute infrastructure may include use of hardware-assisted resumption, to temporarily (or, permanently) migrate the particular task from a first compute location to a second compute location in the distributed computing environment. The use of the fallback compute infrastructure may also or alternatively include the use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, with the use of the deferred execution arrangement being coordinated during underutilization of the fallback compute infrastructure. The use of the fallback compute infrastructure may also or alternatively include (or be based on) a classification of the remediation, with such classification provided from among a plurality of priority categories according to the particular task.
  • At 1260, operations are performed to apply the remediation (alternately, to instantiate the remediation, to invoke the remediation, or to cause the remediation to be applied), and to monitor the results of the remediation, such as to increase the speed of execution of the workload. In some examples, other types of remediation may be applied which are not directly time-based (e.g., to reduce costs to a client, to distribute resources, or to reduce overall network latency). In specific examples, the method 1200 is performed by a first networked processing unit operating as an orchestrator or scheduler of the workload, and the remediation for the particular task is implemented with use of a second networked processing unit (e.g., coordinated or under the control of the first networked processing unit). Thus, the particular task being remediated may be executed by a first set of compute resources, while the remediation includes use of a second set of compute resources associated with the second networked processing unit.
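• For illustration, the flow of operations 1210-1260 could be expressed as the following sketch, in which the splitting, monitoring, threshold, and remediation steps are supplied as caller-provided callables (all names here are assumptions, not the claimed method itself).

    from typing import Callable, List

    def optimize_concurrent_execution(
            split_workload: Callable[[], List[str]],              # 1210: identify/split tasks
            execution_time: Callable[[str], float],               # 1220: monitored time per task
            execution_threshold: Callable[[str], float],          # 1230: per-task threshold
            determine_remediation: Callable[[str, float], str],   # 1250: choose a remediation
            apply_remediation: Callable[[str, str], None]) -> None:  # 1260: apply the remediation
        tasks = split_workload()
        for task_id in tasks:
            elapsed = execution_time(task_id)
            if elapsed > execution_threshold(task_id):            # 1240: threshold exceeded
                remedy = determine_remediation(task_id, elapsed)  # e.g., migrate to fallback infrastructure
                apply_remediation(task_id, remedy)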
• It will be understood that the operations of method 1200 (and other workload monitoring methods discussed herein) may be executed in a variety of situations and conditions, including continuously in some examples. For instance, continuous execution of the method 1200 may enable a mesh network to continuously adapt to changing network and compute node conditions. In other examples, the execution of the method 1200 may be triggered or controlled by other scenarios and situations.
  • Likewise, the techniques of the method 1200 may be enhanced by a variety of other data and telemetry analysis operations not directly illustrated in the flowchart of FIG. 12 . For instance, artificial intelligence or other trained data analysis techniques may be used to identify trends, windows, thresholds, or active/inactive periods, for planning and orchestration and remediation actions as discussed above.
  • Extensions to Task Monitoring and Remediation
  • The following addresses a number of extensions to the approaches discussed above. It will be understood that some of the following extensions may be combined or modified, depending on the capabilities or use cases for concurrent task monitoring.
• A first extension relates to the use of a task-graph registration interface. A task-graph registration interface may be used to identify join points by an application or application-cohort. This interface may be used for task graphs in which a common task is downstream of multiple other tasks (such as task 921, which is downstream of tasks 901, 911, and 912 in FIGS. 9 and 10) but where that common task is wait-free. The common task can be initiated as soon as one task (or a proper subset of the tasks preceding it) completes. Such tasks may be explicitly identified as non-join points.
• A second extension relates to the use of the task-graph registration interface from the previous extension, with an application or application-cohort providing (e.g., registering) a model by which the mean and variance of task completion times can be estimated. For instance, this enhanced registration interface may estimate times on a reference CPU (or XPU, i.e., other type of processing unit) under various simplifying assumptions, such as the assumption of no noisy neighbors, no network congestion, etc. These estimated completion times provide a way of predicting reasonable completion times for various tasks on a given infrastructure, and enable a comparison with other completed tasks (e.g., how much they deviated from estimated times).
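• A small sketch of such a registration, with assumed function names, is shown below: the application registers a (mean, variance) estimate per task type, and the orchestrator can later express an observed completion time as a deviation in standard deviations from the estimate.

    import math

    registered_models = {}   # task type -> (mean_seconds, variance_seconds_squared)

    def register_completion_model(task_type: str, mean_s: float, variance_s2: float) -> None:
        registered_models[task_type] = (mean_s, variance_s2)

    def deviation_in_sigmas(task_type: str, observed_s: float) -> float:
        # How far the observed completion time deviates from the registered estimate.
        mean_s, variance_s2 = registered_models[task_type]
        return (observed_s - mean_s) / max(math.sqrt(variance_s2), 1e-9)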
  • A third extension relates to a call-back interface. In an example, this call-back interface enables an orchestrator or a scheduler to call into an application or an application cohort, in order to obtain application telemetry. Such telemetry may provide an application's estimate of the progress rate of a task in the application (i.e., an interface to indicate how much progress or processing has been accomplished).
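• One possible shape for such a call-back interface is sketched below (the registration functions and the [0.0, 1.0] progress convention are assumptions for illustration).

    from typing import Callable, Dict

    # app_id -> callback returning the fraction of a task's work completed, in [0.0, 1.0]
    _progress_callbacks: Dict[str, Callable[[str], float]] = {}

    def register_progress_callback(app_id: str, callback: Callable[[str], float]) -> None:
        _progress_callbacks[app_id] = callback

    def query_progress(app_id: str, task_id: str) -> float:
        # Invoked by the orchestrator or scheduler to obtain application telemetry.
        return _progress_callbacks[app_id](task_id)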
• A fourth extension relates to an interface to pre-reserve time durations on FBI nodes. This interface can be used to perform pre-scheduled HCR migrations if they become necessary (in some scenarios, only if they become necessary).
• A fifth extension provides additional forms of remediation, such as task-internal self-remediation. In one example, self-remediation may include an ability to perform a task or tasks at a lower accuracy so that an approximate result can be delivered (e.g., lower precision, or higher statistical variance). In another example, self-remediation may include an ability to return a less recently updated statistic, or to substitute a time-series-projected estimate of a result that is repeatedly or periodically computed. For example, in many recommendation algorithms, the exploration space for producing recommendations may be very large, while a stratified sample across the recommendations may be exponentially smaller in size and still meet a high enough quality goal for producing top-N recommendations for small enough N. A coordinator IPU may provide a task-graph interface by which the implementation of a task in a task-graph can request progressive updates for wait times and accordingly tailor its result computation strategy.
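• As a sketch of the recommendation example above (with assumed parameters), a task could shrink its exploration space to fit its remaining wait budget; a simple random subset stands in here for a properly stratified sample.

    import random

    def choose_candidate_subset(candidates: list, remaining_budget_s: float,
                                seconds_per_item: float = 0.001,
                                min_sample: int = 100) -> list:
        # Number of candidates the task can afford to score within its budget.
        affordable = max(min_sample, int(remaining_budget_s / seconds_per_item))
        if affordable >= len(candidates):
            return candidates                         # enough budget: score everything
        return random.sample(candidates, affordable)  # approximate result from a subset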
• A sixth extension provides remediation that enables more than one type of specialization for a fallback infrastructure. Thus, remediation is not limited to high-throughput/low-latency compute, but may also use different types of fallback infrastructure for different combinations of performance characteristics. Some of the relevant performance characteristics that are considered may include: CPU speed, cache sizes, memory capacities, memory and I/O bandwidths, GPU capabilities, and the like.
• A seventh extension provides the broadcast of a back-pressure signal as FBI resources saturate. This enables task graphs with larger aggregate numbers of joins (e.g., a weighted sum of join points with weights proportional to in-degrees) to receive higher priorities and earlier scheduling than others.
  • Additional Examples
  • Additional examples of the presently described method, system, and device embodiments include the following, non-limiting implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.
  • Example 1 is a method for task management of a workload in a distributed computing environment, comprising: identifying multiple tasks of a computing workload, wherein the workload includes processing dependencies among the tasks, and wherein two or more of the tasks are executed concurrently; monitoring or evaluating an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks; identifying the execution time of a particular task as exceeding an execution time threshold for the particular task; determining a remediation based on the particular task and the identified execution time, the remediation including use of other compute resources in the distributed computing environment; and applying or causing the remediation to increase speed of execution of the workload.
  • In Example 2, the subject matter of Example 1 optionally includes wherein the particular task provides an input to a dependent task, and wherein the dependent task is a join point of the workload that receives a control input or data input from the particular task and at least one previous task of the workload.
  • In Example 3, the subject matter of Example 2 optionally includes wherein the remediation is applied in response to determining that the dependent task is a join point of the workload.
  • In Example 4, the subject matter of any one or more of Examples 2-3 optionally include the method further comprising: calculating the execution time threshold for the particular task, wherein the execution time threshold is weighted by an amount of waiting time elapsed for at least one completed task to reach the join point and wait for the particular task.
  • In Example 5, the subject matter of any one or more of Examples 1-4 optionally include wherein identifying the multiple tasks of the workload comprises splitting the workload into the multiple tasks, and wherein the method further comprises distributing the multiple tasks among multiple compute locations of the distributed computing environment.
  • In Example 6, the subject matter of any one or more of Examples 1-5 optionally include wherein the remediation includes use of fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time.
  • In Example 7, the subject matter of Example 6 optionally includes wherein the use of the fallback compute infrastructure includes use of hardware-assisted resumption, to migrate the particular task from a first compute location to a second compute location in the distributed computing environment.
  • In Example 8, the subject matter of any one or more of Examples 6-7 optionally include wherein the use of the fallback compute infrastructure includes use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, and wherein the use of the deferred execution arrangement is coordinated during underutilization of the fallback compute infrastructure.
  • In Example 9, the subject matter of any one or more of Examples 6-8 optionally include wherein the use of the fallback compute infrastructure is based on a classification of the remediation, the classification provided from among a plurality of priority categories according to the particular task.
  • In Example 10, the subject matter of any one or more of Examples 1-9 optionally include wherein the method is performed by a first networked processing unit operating as an orchestrator or scheduler of the workload, and wherein the remediation for the particular task is implemented with use of a second networked processing unit.
  • In Example 11, the subject matter of Example 10 optionally includes wherein the particular task is executed by a first set of compute resources, and wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit.
• Example 12 is a device, comprising: a networked processing unit, the networked processing unit connected to a distributed computing environment via a network; and a storage medium including instructions embodied thereon, wherein the instructions, which when executed by the networked processing unit, configure the networked processing unit to: identify multiple tasks of a computing workload, wherein the workload includes processing dependencies among the tasks, and wherein two or more of the tasks are executed concurrently; monitor or evaluate an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks; identify the execution time of a particular task as exceeding an execution time threshold for the particular task; determine a remediation based on the particular task and the identified execution time, the remediation including use of other compute resources in the distributed computing environment; and apply or cause the remediation to increase speed of execution of the workload.
  • In Example 13, the subject matter of Example 12 optionally includes wherein the particular task provides an input to a dependent task, and wherein the dependent task is a join point of the workload that receives a control input or data input from the particular task and at least one previous task of the workload.
  • In Example 14, the subject matter of Example 13 optionally includes wherein the remediation is applied in response to determining that the dependent task is a join point of the workload.
  • In Example 15, the subject matter of any one or more of Examples 13-14 optionally include the instructions further to configure the networked processing unit to: calculate the execution time threshold for the particular task, wherein the execution time threshold is weighted by an amount of waiting time elapsed for at least one completed task to reach the join point and wait for the particular task.
  • In Example 16, the subject matter of any one or more of Examples 12-15 optionally include wherein to identify the multiple tasks of the workload is performed in response to splitting the workload into the multiple tasks, and wherein the instructions further configure the networked processing unit to cause the multiple tasks to be distributed among multiple compute locations of the distributed computing environment.
  • In Example 17, the subject matter of any one or more of Examples 12-16 optionally include wherein the remediation includes causing fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time.
  • In Example 18, the subject matter of Example 17 optionally includes wherein use of the fallback compute infrastructure includes use of hardware-assisted resumption, to migrate the particular task from a first compute location to a second compute location in the distributed computing environment.
  • In Example 19, the subject matter of any one or more of Examples 17-18 optionally include wherein use of the fallback compute infrastructure includes use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, and wherein the use of the deferred execution arrangement is coordinated during underutilization of the fallback compute infrastructure.
  • In Example 20, the subject matter of any one or more of Examples 17-19 optionally include wherein use of the fallback compute infrastructure is based on a classification of the remediation, the classification provided from among a plurality of priority categories according to the particular task.
  • In Example 21, the subject matter of any one or more of Examples 12-20 optionally include wherein the networked processing unit operates as an orchestrator or scheduler of the workload, and wherein the remediation for the particular task is implemented with use of a second networked processing unit connected via the network.
  • In Example 22, the subject matter of Example 21 optionally includes wherein the particular task is executed by a first set of compute resources associated with the device, and wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit.
  • Example 23 is a machine-readable medium (e.g., a non-transitory storage medium) comprising information (e.g., data) representative of instructions, wherein the instructions, when executed by processing circuitry, cause the processing circuitry to perform, implement, or deploy any of Examples 1-22.
  • Example 24 is an apparatus of an edge computing system comprising means to implement any of Examples 1-23, or other subject matter described herein.
  • Example 25 is an apparatus of an edge computing system comprising logic, modules, circuitry, or other means to implement any of Examples 1-23, or other subject matter described herein.
  • Example 26 is a networked processing unit (e.g., an infrastructure processing unit as discussed here) or system including a networked processing unit, configured to implement any of Examples 1-23, or other subject matter described herein.
  • Example 27 is an edge computing system, including respective edge processing devices and nodes to invoke or perform any of the operations of Examples 1-23, or other subject matter described herein.
  • Example 28 is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of any Examples 1-23, or other subject matter described herein.
  • Example 29 is a system to implement any of Examples 1-28.
  • Example 30 is a method to implement any of Examples 1-28.
  • Although these implementations have been described concerning specific exemplary aspects, it will be evident that various modifications and changes may be made to these aspects without departing from the broader scope of the present disclosure. Many of the arrangements and processes described herein can be used in combination or in parallel implementations that involve terrestrial network connectivity (where available) to increase network bandwidth/throughput and to support additional edge services. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration, and not of limitation, specific aspects in which the subject matter may be practiced. The aspects illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other aspects may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various aspects is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
  • Such aspects of the inventive subject matter may be referred to herein, individually and/or collectively, merely for convenience and without intending to voluntarily limit the scope of this application to any single aspect or inventive concept if more than one is disclosed. Thus, although specific aspects have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific aspects shown. This disclosure is intended to cover any adaptations or variations of various aspects. Combinations of the above aspects and other aspects not specifically described herein will be apparent to those of skill in the art upon reviewing the above description.

Claims (25)

What is claimed is:
1. A method for task management of a workload in a distributed computing environment, comprising:
identifying multiple tasks of a computing workload, wherein the workload includes processing dependencies among the tasks, and wherein two or more of the tasks are executed concurrently;
monitoring an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks;
identifying the execution time of a particular task as exceeding an execution time threshold for the particular task;
determining a remediation based on the particular task and the identified execution time, the remediation including use of other compute resources in the distributed computing environment; and
applying the remediation to increase speed of execution of the workload.
2. The method of claim 1, wherein the particular task provides an input to a dependent task, and wherein the dependent task is a join point of the workload that receives a control input or data input from the particular task and at least one previous task of the workload.
3. The method of claim 2, wherein the remediation is applied in response to determining that the dependent task is a join point of the workload.
4. The method of claim 2, the method further comprising:
calculating the execution time threshold for the particular task, wherein the execution time threshold is weighted by an amount of waiting time elapsed for at least one completed task to reach the join point and wait for the particular task.
5. The method of claim 1, wherein identifying the multiple tasks of the workload comprises splitting the workload into the multiple tasks, and wherein the method further comprises distributing the multiple tasks among multiple compute locations of the distributed computing environment.
6. The method of claim 1, wherein the remediation includes use of fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time.
7. The method of claim 6, wherein the use of the fallback compute infrastructure includes use of hardware-assisted resumption, to migrate the particular task from a first compute location to a second compute location in the distributed computing environment.
8. The method of claim 6, wherein the use of the fallback compute infrastructure includes use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, and wherein the use of the deferred execution arrangement is coordinated during underutilization of the fallback compute infrastructure.
9. The method of claim 6, wherein the use of the fallback compute infrastructure is based on a classification of the remediation, the classification provided from among a plurality of priority categories according to the particular task.
10. The method of claim 1, wherein the method is performed by a first networked processing unit operating as an orchestrator or scheduler of the workload, and wherein the remediation for the particular task is implemented with use of a second networked processing unit.
11. The method of claim 10, wherein the particular task is executed by a first set of compute resources, and wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit.
12. A device, comprising:
a networked processing unit, the networked processing unit connected to a distributed computing environment via a network; and
a storage medium including instructions embodied thereon, wherein the instructions, which when executed by the networked processing unit, configure the networked processing unit to:
identify multiple tasks of a computing workload, wherein the workload includes processing dependencies among the tasks, and wherein two or more of the tasks are executed concurrently;
monitor an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks;
identify the execution time of a particular task as exceeding an execution time threshold for the particular task;
determine a remediation based on the particular task and the identified execution time, the remediation including use of other compute resources in the distributed computing environment; and
apply the remediation to increase speed of execution of the workload.
13. The device of claim 12, wherein the particular task provides an input to a dependent task, and wherein the dependent task is a join point of the workload that receives a control input or data input from the particular task and at least one previous task of the workload.
14. The device of claim 13, wherein the remediation is applied in response to determining that the dependent task is a join point of the workload.
15. The device of claim 13, the instructions further to configure the networked processing unit to:
calculate the execution time threshold for the particular task, wherein the execution time threshold is weighted by an amount of waiting time elapsed for at least one completed task to reach the join point and wait for the particular task.
16. The device of claim 12, wherein to identify the multiple tasks of the workload is performed in response to splitting the workload into the multiple tasks, and wherein the instructions further configure the networked processing unit to cause the multiple tasks to be distributed among multiple compute locations of the distributed computing environment.
17. The device of claim 12, wherein the remediation includes causing fallback compute infrastructure to perform at least a portion of the workload for at least a defined period of time.
18. The device of claim 17, wherein use of the fallback compute infrastructure includes use of hardware-assisted resumption, to migrate the particular task from a first compute location to a second compute location in the distributed computing environment.
19. The device of claim 17, wherein use of the fallback compute infrastructure includes use of a deferred execution arrangement for at least one task in the workload that does not have dependencies, and wherein the use of the deferred execution arrangement is coordinated during underutilization of the fallback compute infrastructure.
20. The device of claim 17, wherein use of the fallback compute infrastructure is based on a classification of the remediation, the classification provided from among a plurality of priority categories according to the particular task.
21. The device of claim 12, wherein the networked processing unit operates as an orchestrator or scheduler of the workload, and wherein the remediation for the particular task is implemented with use of a second networked processing unit connected via the network.
22. The device of claim 21, wherein the particular task is executed by a first set of compute resources associated with the device, and wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit.
23. A non-transitory machine-readable storage medium comprising information representative of instructions, wherein the instructions, when executed by processing circuitry, cause the processing circuitry to:
identify multiple tasks of a computing workload in a distributed computing environment, wherein the workload includes processing dependencies among the tasks, and wherein two or more of the tasks are executed concurrently;
evaluate an execution time for each of the tasks, relative to a respective execution time threshold applicable for each of the tasks;
identify the execution time of a particular task as exceeding an execution time threshold for the particular task;
determine a remediation based on the particular task and the identified execution time, the remediation including use of other compute resources in the distributed computing environment; and
cause the remediation to be applied to increase speed of execution of the workload.
24. The non-transitory machine-readable storage medium of claim 23,
wherein the particular task provides an input to a dependent task,
wherein the dependent task is a join point of the workload that receives a control input or data input from the particular task and at least one previous task of the workload, and
wherein the remediation is applied in response to determining that the dependent task is a join point of the workload.
25. The non-transitory machine-readable storage medium of claim 23,
wherein the processing circuitry is a first networked processing unit operating as an orchestrator or scheduler of the workload,
wherein the remediation for the particular task is implemented with use of a second networked processing unit,
wherein the particular task is executed by a first set of compute resources, and
wherein the remediation includes use of a second set of compute resources associated with the second networked processing unit.